Mar 17 17:46:00.895058 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 17:46:00.895085 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Mar 17 16:11:40 -00 2025
Mar 17 17:46:00.895096 kernel: KASLR enabled
Mar 17 17:46:00.895102 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 17 17:46:00.895108 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Mar 17 17:46:00.895114 kernel: random: crng init done
Mar 17 17:46:00.895121 kernel: secureboot: Secure boot disabled
Mar 17 17:46:00.895127 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:46:00.895133 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Mar 17 17:46:00.895142 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Mar 17 17:46:00.895148 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:46:00.895154 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:46:00.895160 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:46:00.895166 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:46:00.895173 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:46:00.895181 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:46:00.895187 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:46:00.895194 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:46:00.895200 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:46:00.895206 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 17 17:46:00.895212 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Mar 17 17:46:00.895219 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:46:00.895225 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Mar 17 17:46:00.895232 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Mar 17 17:46:00.895238 kernel: Zone ranges:
Mar 17 17:46:00.895247 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 17 17:46:00.895254 kernel: DMA32 empty
Mar 17 17:46:00.895260 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Mar 17 17:46:00.895266 kernel: Movable zone start for each node
Mar 17 17:46:00.895272 kernel: Early memory node ranges
Mar 17 17:46:00.895278 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Mar 17 17:46:00.895288 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Mar 17 17:46:00.895295 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Mar 17 17:46:00.895302 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Mar 17 17:46:00.895310 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Mar 17 17:46:00.895316 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Mar 17 17:46:00.895323 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Mar 17 17:46:00.895331 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Mar 17 17:46:00.895338 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Mar 17 17:46:00.895345 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Mar 17 17:46:00.895399 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 17 17:46:00.895407 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:46:00.895413 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 17:46:00.895423 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:46:00.895430 kernel: psci: Trusted OS migration not required
Mar 17 17:46:00.895436 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:46:00.895443 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 17 17:46:00.895449 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:46:00.895456 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:46:00.895463 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 17:46:00.895469 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:46:00.895476 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:46:00.895482 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 17:46:00.895490 kernel: CPU features: detected: Spectre-v4
Mar 17 17:46:00.895497 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:46:00.895503 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 17:46:00.895510 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 17:46:00.895516 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 17:46:00.895523 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 17:46:00.895529 kernel: alternatives: applying boot alternatives
Mar 17 17:46:00.895537 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a
Mar 17 17:46:00.895544 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:46:00.895551 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:46:00.895558 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:46:00.895566 kernel: Fallback order for Node 0: 0
Mar 17 17:46:00.895573 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Mar 17 17:46:00.895579 kernel: Policy zone: Normal
Mar 17 17:46:00.895586 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:46:00.895593 kernel: software IO TLB: area num 2.
Mar 17 17:46:00.895600 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Mar 17 17:46:00.895607 kernel: Memory: 3883896K/4096000K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 212104K reserved, 0K cma-reserved)
Mar 17 17:46:00.895614 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:46:00.895621 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:46:00.895629 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:46:00.895635 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:46:00.895643 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:46:00.895652 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:46:00.895660 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:46:00.895667 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:46:00.895674 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:46:00.895681 kernel: GICv3: 256 SPIs implemented
Mar 17 17:46:00.895688 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:46:00.895695 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:46:00.895702 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 17 17:46:00.895710 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 17 17:46:00.895717 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 17 17:46:00.895724 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:46:00.895733 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:46:00.895740 kernel: GICv3: using LPI property table @0x00000001000e0000
Mar 17 17:46:00.895747 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Mar 17 17:46:00.895754 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:46:00.895761 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:46:00.895769 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 17:46:00.895776 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 17:46:00.895783 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 17:46:00.895790 kernel: Console: colour dummy device 80x25
Mar 17 17:46:00.895798 kernel: ACPI: Core revision 20230628
Mar 17 17:46:00.895806 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 17:46:00.895815 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:46:00.895823 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:46:00.895830 kernel: landlock: Up and running.
Mar 17 17:46:00.895838 kernel: SELinux: Initializing.
Mar 17 17:46:00.895845 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:46:00.895852 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:46:00.895860 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:46:00.895867 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:46:00.895875 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:46:00.895885 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:46:00.895892 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 17 17:46:00.895900 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 17 17:46:00.895916 kernel: Remapping and enabling EFI services.
Mar 17 17:46:00.895924 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:46:00.895932 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:46:00.895940 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 17 17:46:00.895948 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Mar 17 17:46:00.895955 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:46:00.895965 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 17:46:00.895973 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:46:00.895986 kernel: SMP: Total of 2 processors activated.
Mar 17 17:46:00.895995 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:46:00.896003 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 17:46:00.896010 kernel: CPU features: detected: Common not Private translations
Mar 17 17:46:00.896018 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:46:00.896026 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 17 17:46:00.896034 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 17:46:00.896044 kernel: CPU features: detected: LSE atomic instructions
Mar 17 17:46:00.896051 kernel: CPU features: detected: Privileged Access Never
Mar 17 17:46:00.896059 kernel: CPU features: detected: RAS Extension Support
Mar 17 17:46:00.896066 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 17 17:46:00.896074 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:46:00.896082 kernel: alternatives: applying system-wide alternatives
Mar 17 17:46:00.896090 kernel: devtmpfs: initialized
Mar 17 17:46:00.896097 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:46:00.896107 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:46:00.896114 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:46:00.896122 kernel: SMBIOS 3.0.0 present.
Mar 17 17:46:00.896129 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Mar 17 17:46:00.896137 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:46:00.896144 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:46:00.896152 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:46:00.896159 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:46:00.896166 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:46:00.896176 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Mar 17 17:46:00.896184 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:46:00.896191 kernel: cpuidle: using governor menu
Mar 17 17:46:00.896198 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:46:00.896206 kernel: ASID allocator initialised with 32768 entries
Mar 17 17:46:00.896213 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:46:00.896221 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:46:00.896228 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 17 17:46:00.896236 kernel: Modules: 0 pages in range for non-PLT usage
Mar 17 17:46:00.896245 kernel: Modules: 509280 pages in range for PLT usage
Mar 17 17:46:00.896253 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:46:00.896261 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:46:00.896269 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:46:00.896277 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:46:00.896285 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:46:00.896292 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:46:00.896299 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:46:00.896307 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:46:00.896316 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:46:00.896326 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:46:00.896333 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:46:00.896342 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:46:00.896349 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:46:00.898423 kernel: ACPI: Interpreter enabled
Mar 17 17:46:00.898435 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:46:00.898443 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:46:00.898450 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 17:46:00.898464 kernel: printk: console [ttyAMA0] enabled
Mar 17 17:46:00.898472 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:46:00.898632 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:46:00.898709 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:46:00.898788 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:46:00.898855 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 17 17:46:00.899006 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 17 17:46:00.899025 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 17 17:46:00.899033 kernel: PCI host bridge to bus 0000:00
Mar 17 17:46:00.899117 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 17 17:46:00.899185 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:46:00.899250 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 17 17:46:00.899315 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:46:00.899430 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 17 17:46:00.899526 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Mar 17 17:46:00.899604 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Mar 17 17:46:00.899678 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 17 17:46:00.899768 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 17 17:46:00.899844 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Mar 17 17:46:00.899978 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 17 17:46:00.900068 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Mar 17 17:46:00.900152 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 17 17:46:00.900223 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Mar 17 17:46:00.900306 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 17 17:46:00.901471 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Mar 17 17:46:00.901575 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 17 17:46:00.901654 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Mar 17 17:46:00.901748 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 17 17:46:00.901826 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Mar 17 17:46:00.901924 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 17 17:46:00.902006 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Mar 17 17:46:00.902089 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 17 17:46:00.902167 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Mar 17 17:46:00.902249 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 17 17:46:00.902335 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Mar 17 17:46:00.903533 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Mar 17 17:46:00.903630 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Mar 17 17:46:00.903727 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 17:46:00.903812 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Mar 17 17:46:00.903892 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:46:00.903978 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 17 17:46:00.904063 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 17 17:46:00.904156 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Mar 17 17:46:00.904242 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 17 17:46:00.904327 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Mar 17 17:46:00.906665 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Mar 17 17:46:00.906773 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 17 17:46:00.906860 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Mar 17 17:46:00.906986 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 17 17:46:00.907077 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Mar 17 17:46:00.907167 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Mar 17 17:46:00.907264 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 17 17:46:00.907377 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Mar 17 17:46:00.907453 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 17 17:46:00.907532 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 17:46:00.907618 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Mar 17 17:46:00.907702 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Mar 17 17:46:00.907786 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 17 17:46:00.907878 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Mar 17 17:46:00.908023 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Mar 17 17:46:00.908104 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Mar 17 17:46:00.908178 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Mar 17 17:46:00.908247 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Mar 17 17:46:00.908315 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Mar 17 17:46:00.908448 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 17 17:46:00.908542 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Mar 17 17:46:00.908624 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Mar 17 17:46:00.908700 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 17 17:46:00.908771 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Mar 17 17:46:00.908844 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Mar 17 17:46:00.908943 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 17 17:46:00.909020 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Mar 17 17:46:00.909096 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Mar 17 17:46:00.909185 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 17 17:46:00.909266 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Mar 17 17:46:00.909339 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Mar 17 17:46:00.909455 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 17 17:46:00.909532 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Mar 17 17:46:00.909615 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Mar 17 17:46:00.909700 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 17 17:46:00.909783 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Mar 17 17:46:00.909863 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Mar 17 17:46:00.909969 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 17 17:46:00.910051 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Mar 17 17:46:00.910131 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Mar 17 17:46:00.910210 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Mar 17 17:46:00.910291 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:46:00.914461 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Mar 17 17:46:00.914578 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:46:00.914652 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Mar 17 17:46:00.914724 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:46:00.914801 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Mar 17 17:46:00.914875 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:46:00.914972 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Mar 17 17:46:00.915063 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:46:00.915138 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Mar 17 17:46:00.915207 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:46:00.915280 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Mar 17 17:46:00.915349 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:46:00.915539 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Mar 17 17:46:00.915607 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:46:00.915682 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Mar 17 17:46:00.915749 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:46:00.915822 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Mar 17 17:46:00.915888 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Mar 17 17:46:00.916010 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Mar 17 17:46:00.916082 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 17 17:46:00.916152 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Mar 17 17:46:00.916222 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 17 17:46:00.916289 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Mar 17 17:46:00.916371 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 17 17:46:00.916446 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Mar 17 17:46:00.916517 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 17 17:46:00.916584 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Mar 17 17:46:00.916651 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 17 17:46:00.916720 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Mar 17 17:46:00.916787 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 17 17:46:00.916857 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Mar 17 17:46:00.916936 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 17 17:46:00.917012 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Mar 17 17:46:00.917081 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 17 17:46:00.917151 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Mar 17 17:46:00.917218 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Mar 17 17:46:00.917292 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Mar 17 17:46:00.919422 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Mar 17 17:46:00.919527 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:46:00.919597 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Mar 17 17:46:00.919664 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 17 17:46:00.919731 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 17 17:46:00.919801 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Mar 17 17:46:00.919869 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:46:00.919968 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Mar 17 17:46:00.920045 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 17 17:46:00.920112 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 17 17:46:00.920178 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Mar 17 17:46:00.920248 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:46:00.920329 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 17 17:46:00.920424 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Mar 17 17:46:00.920500 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 17 17:46:00.920579 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 17 17:46:00.920649 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Mar 17 17:46:00.920721 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:46:00.920805 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 17 17:46:00.920888 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 17 17:46:00.921009 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 17 17:46:00.921086 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Mar 17 17:46:00.921155 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:46:00.921230 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Mar 17 17:46:00.921299 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Mar 17 17:46:00.921389 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 17 17:46:00.921462 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 17 17:46:00.921529 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Mar 17 17:46:00.921600 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:46:00.921679 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Mar 17 17:46:00.921748 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Mar 17 17:46:00.921825 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 17 17:46:00.921896 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 17 17:46:00.922013 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Mar 17 17:46:00.922097 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:46:00.922189 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Mar 17 17:46:00.922278 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Mar 17 17:46:00.926440 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Mar 17 17:46:00.926550 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 17 17:46:00.926621 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 17 17:46:00.926688 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Mar 17 17:46:00.926766 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:46:00.926841 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 17 17:46:00.926945 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 17 17:46:00.927030 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Mar 17 17:46:00.927112 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:46:00.927198 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 17 17:46:00.927277 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Mar 17 17:46:00.927376 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Mar 17 17:46:00.927466 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:46:00.927572 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 17 17:46:00.927654 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:46:00.927729 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 17 17:46:00.927831 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 17 17:46:00.927962 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Mar 17 17:46:00.928049 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:46:00.928137 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Mar 17 17:46:00.928212 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Mar 17 17:46:00.928292 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:46:00.930481 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Mar 17 17:46:00.930583 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Mar 17 17:46:00.930650 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:46:00.930734 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Mar 17 17:46:00.930806 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Mar 17 17:46:00.930876 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:46:00.931013 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Mar 17 17:46:00.931088 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Mar 17 17:46:00.931157 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:46:00.931234 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Mar 17 17:46:00.931307 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Mar 17 17:46:00.931396 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:46:00.931477 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Mar 17 17:46:00.931546 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Mar 17 17:46:00.931615 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:46:00.931705 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Mar 17 17:46:00.931774 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Mar 17 17:46:00.931850 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:46:00.931944 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Mar 17 17:46:00.932019 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Mar 17 17:46:00.932088 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:46:00.932099 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:46:00.932111 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:46:00.932121 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:46:00.932129 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:46:00.932139 kernel: iommu: Default domain type: Translated
Mar 17 17:46:00.932147 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:46:00.932156 kernel: efivars: Registered efivars operations
Mar 17 17:46:00.932164 kernel: vgaarb: loaded
Mar 17 17:46:00.932172 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:46:00.932180 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:46:00.932189 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:46:00.932196 kernel: pnp: PnP ACPI init
Mar 17 17:46:00.932281 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 17 17:46:00.932295 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:46:00.932303 kernel: NET: Registered PF_INET protocol family
Mar 17 17:46:00.932311 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:46:00.932319 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:46:00.932327 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:46:00.932335 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:46:00.932344 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:46:00.933668 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:46:00.933693 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:46:00.933702 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:46:00.933711 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:46:00.933834 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Mar 17 17:46:00.933846 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:46:00.933854 kernel: kvm [1]: HYP mode not available
Mar 17 17:46:00.933862 kernel: Initialise system trusted keyrings
Mar 17 17:46:00.933869 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:46:00.933877 kernel: Key type asymmetric registered
Mar 17 17:46:00.933887 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:46:00.933895 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:46:00.933915 kernel: io scheduler mq-deadline registered
Mar 17 17:46:00.933926 kernel: io scheduler kyber registered
Mar 17 17:46:00.933934 kernel: io scheduler bfq registered
Mar 17 17:46:00.933942 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 17 17:46:00.934028 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Mar 17 17:46:00.934106 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Mar 17 17:46:00.934194 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 17 17:46:00.934279 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Mar 17 17:46:00.934430 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Mar 17 17:46:00.934527 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Mar 17 17:46:00.934613 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Mar 17 17:46:00.934686 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Mar 17 17:46:00.934777 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:46:00.934864 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 17 17:46:00.934979 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 17 17:46:00.935067 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:46:00.935154 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 17 17:46:00.935242 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 17 17:46:00.935318 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:46:00.936460 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 17 17:46:00.936558 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 17 17:46:00.936634 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:46:00.936707 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 17 17:46:00.936793 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 17 17:46:00.936885 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:46:00.936996 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 17 17:46:00.937085 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 17 17:46:00.937166 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 
17:46:00.937176 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Mar 17 17:46:00.937244 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Mar 17 17:46:00.937333 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Mar 17 17:46:00.939122 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:46:00.939145 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 17 17:46:00.939154 kernel: ACPI: button: Power Button [PWRB] Mar 17 17:46:00.939163 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 17 17:46:00.939258 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Mar 17 17:46:00.939340 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Mar 17 17:46:00.939362 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:46:00.939379 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 17 17:46:00.939454 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Mar 17 17:46:00.939465 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Mar 17 17:46:00.939472 kernel: thunder_xcv, ver 1.0 Mar 17 17:46:00.939480 kernel: thunder_bgx, ver 1.0 Mar 17 17:46:00.939488 kernel: nicpf, ver 1.0 Mar 17 17:46:00.939495 kernel: nicvf, ver 1.0 Mar 17 17:46:00.939576 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 17:46:00.939642 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:46:00 UTC (1742233560) Mar 17 17:46:00.939655 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:46:00.939665 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 17 17:46:00.939674 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 17 17:46:00.939683 kernel: watchdog: Hard watchdog permanently disabled Mar 17 17:46:00.939692 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:46:00.939701 kernel: Segment 
Routing with IPv6 Mar 17 17:46:00.939710 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:46:00.939718 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:46:00.939731 kernel: Key type dns_resolver registered Mar 17 17:46:00.939740 kernel: registered taskstats version 1 Mar 17 17:46:00.939750 kernel: Loading compiled-in X.509 certificates Mar 17 17:46:00.939759 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: f4ff2820cf7379ce82b759137d15b536f0a99b51' Mar 17 17:46:00.939767 kernel: Key type .fscrypt registered Mar 17 17:46:00.939776 kernel: Key type fscrypt-provisioning registered Mar 17 17:46:00.939784 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 17:46:00.939792 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:46:00.939799 kernel: ima: No architecture policies found Mar 17 17:46:00.939808 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 17:46:00.939816 kernel: clk: Disabling unused clocks Mar 17 17:46:00.939825 kernel: Freeing unused kernel memory: 38336K Mar 17 17:46:00.939832 kernel: Run /init as init process Mar 17 17:46:00.939840 kernel: with arguments: Mar 17 17:46:00.939848 kernel: /init Mar 17 17:46:00.939856 kernel: with environment: Mar 17 17:46:00.939863 kernel: HOME=/ Mar 17 17:46:00.939871 kernel: TERM=linux Mar 17 17:46:00.939879 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:46:00.939888 systemd[1]: Successfully made /usr/ read-only. Mar 17 17:46:00.939899 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:46:00.939923 systemd[1]: Detected virtualization kvm. Mar 17 17:46:00.939931 systemd[1]: Detected architecture arm64. 
Mar 17 17:46:00.939939 systemd[1]: Running in initrd. Mar 17 17:46:00.939946 systemd[1]: No hostname configured, using default hostname. Mar 17 17:46:00.939956 systemd[1]: Hostname set to . Mar 17 17:46:00.939965 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:46:00.939973 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:46:00.939981 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:46:00.939990 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:46:00.939998 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:46:00.940006 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:46:00.940014 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:46:00.940025 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:46:00.940034 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:46:00.940043 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:46:00.940051 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:46:00.940059 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:46:00.940067 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:46:00.940075 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:46:00.940085 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:46:00.940093 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:46:00.940101 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Mar 17 17:46:00.940109 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:46:00.940117 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:46:00.940125 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 17 17:46:00.940133 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:46:00.940141 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:46:00.940149 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:46:00.940158 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:46:00.940166 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:46:00.940174 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:46:00.940182 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:46:00.940190 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:46:00.940198 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:46:00.940207 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:46:00.940215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:46:00.940224 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:46:00.940266 systemd-journald[236]: Collecting audit messages is disabled. Mar 17 17:46:00.940288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:46:00.940299 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:46:00.940308 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:46:00.940316 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 17 17:46:00.940324 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:46:00.940332 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:46:00.940340 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:46:00.940350 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:46:00.942409 kernel: Bridge firewalling registered Mar 17 17:46:00.942424 systemd-journald[236]: Journal started Mar 17 17:46:00.942448 systemd-journald[236]: Runtime Journal (/run/log/journal/91d6e64d566c4ae590d94bb9e26d7be8) is 8M, max 76.6M, 68.6M free. Mar 17 17:46:00.912142 systemd-modules-load[237]: Inserted module 'overlay' Mar 17 17:46:00.943539 systemd-modules-load[237]: Inserted module 'br_netfilter' Mar 17 17:46:00.945831 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:46:00.947587 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:46:00.958521 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:46:00.963501 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:46:00.968409 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:46:00.970723 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:46:00.979514 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:46:00.980651 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:46:00.995036 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:46:00.999678 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 17 17:46:01.009288 dracut-cmdline[269]: dracut-dracut-053 Mar 17 17:46:01.012655 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a Mar 17 17:46:01.046061 systemd-resolved[273]: Positive Trust Anchors: Mar 17 17:46:01.046080 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:46:01.046111 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:46:01.051457 systemd-resolved[273]: Defaulting to hostname 'linux'. Mar 17 17:46:01.053214 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:46:01.053964 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:46:01.108387 kernel: SCSI subsystem initialized Mar 17 17:46:01.113384 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:46:01.121386 kernel: iscsi: registered transport (tcp) Mar 17 17:46:01.134419 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:46:01.134478 kernel: QLogic iSCSI HBA Driver Mar 17 17:46:01.178941 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Mar 17 17:46:01.184609 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:46:01.201646 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:46:01.201725 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:46:01.202429 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:46:01.249422 kernel: raid6: neonx8 gen() 15673 MB/s Mar 17 17:46:01.266430 kernel: raid6: neonx4 gen() 15728 MB/s Mar 17 17:46:01.283409 kernel: raid6: neonx2 gen() 13179 MB/s Mar 17 17:46:01.300436 kernel: raid6: neonx1 gen() 10403 MB/s Mar 17 17:46:01.317407 kernel: raid6: int64x8 gen() 6732 MB/s Mar 17 17:46:01.334416 kernel: raid6: int64x4 gen() 7299 MB/s Mar 17 17:46:01.351409 kernel: raid6: int64x2 gen() 6080 MB/s Mar 17 17:46:01.368435 kernel: raid6: int64x1 gen() 5006 MB/s Mar 17 17:46:01.368549 kernel: raid6: using algorithm neonx4 gen() 15728 MB/s Mar 17 17:46:01.385482 kernel: raid6: .... xor() 12281 MB/s, rmw enabled Mar 17 17:46:01.385613 kernel: raid6: using neon recovery algorithm Mar 17 17:46:01.390631 kernel: xor: measuring software checksum speed Mar 17 17:46:01.390698 kernel: 8regs : 21618 MB/sec Mar 17 17:46:01.391489 kernel: 32regs : 21670 MB/sec Mar 17 17:46:01.391550 kernel: arm64_neon : 27832 MB/sec Mar 17 17:46:01.391570 kernel: xor: using function: arm64_neon (27832 MB/sec) Mar 17 17:46:01.442421 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:46:01.457395 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:46:01.464516 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:46:01.479073 systemd-udevd[455]: Using default interface naming scheme 'v255'. Mar 17 17:46:01.483938 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Mar 17 17:46:01.494585 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:46:01.508843 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Mar 17 17:46:01.544434 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:46:01.551644 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:46:01.602516 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:46:01.614524 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:46:01.630446 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:46:01.631285 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:46:01.635438 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:46:01.637104 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:46:01.645619 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:46:01.663697 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:46:01.718635 kernel: ACPI: bus type USB registered Mar 17 17:46:01.718688 kernel: usbcore: registered new interface driver usbfs Mar 17 17:46:01.720391 kernel: usbcore: registered new interface driver hub Mar 17 17:46:01.723421 kernel: usbcore: registered new device driver usb Mar 17 17:46:01.723479 kernel: scsi host0: Virtio SCSI HBA Mar 17 17:46:01.730090 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 17 17:46:01.733402 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 17 17:46:01.749279 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:46:01.750672 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 17 17:46:01.752175 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:46:01.754391 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:46:01.754557 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:46:01.757046 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:46:01.767617 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:46:01.784526 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 17 17:46:01.794075 kernel: sr 0:0:0:0: Power-on or device reset occurred Mar 17 17:46:01.802086 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 17 17:46:01.802241 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 17 17:46:01.802341 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 17 17:46:01.802453 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 17 17:46:01.802568 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Mar 17 17:46:01.802672 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 17 17:46:01.802761 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 17:46:01.802772 kernel: hub 1-0:1.0: USB hub found Mar 17 17:46:01.802887 kernel: hub 1-0:1.0: 4 ports detected Mar 17 17:46:01.803038 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 17 17:46:01.803185 kernel: hub 2-0:1.0: USB hub found Mar 17 17:46:01.803294 kernel: hub 2-0:1.0: 4 ports detected Mar 17 17:46:01.803412 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Mar 17 17:46:01.786950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:46:01.795592 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 17 17:46:01.811439 kernel: sd 0:0:0:1: Power-on or device reset occurred Mar 17 17:46:01.822524 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Mar 17 17:46:01.822773 kernel: sd 0:0:0:1: [sda] Write Protect is off Mar 17 17:46:01.823104 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Mar 17 17:46:01.823339 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 17 17:46:01.823638 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 17:46:01.823666 kernel: GPT:17805311 != 80003071 Mar 17 17:46:01.823695 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 17:46:01.823718 kernel: GPT:17805311 != 80003071 Mar 17 17:46:01.823740 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 17:46:01.823763 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:46:01.823787 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Mar 17 17:46:01.829674 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:46:01.867401 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (520) Mar 17 17:46:01.879702 kernel: BTRFS: device fsid 5ecee764-de70-4de1-8711-3798360e0d13 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (525) Mar 17 17:46:01.900196 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 17 17:46:01.912749 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 17 17:46:01.924036 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 17 17:46:01.931372 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 17 17:46:01.932028 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Mar 17 17:46:01.942545 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 17 17:46:01.951053 disk-uuid[575]: Primary Header is updated. Mar 17 17:46:01.951053 disk-uuid[575]: Secondary Entries is updated. Mar 17 17:46:01.951053 disk-uuid[575]: Secondary Header is updated. Mar 17 17:46:01.960846 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:46:02.031397 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 17 17:46:02.273428 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Mar 17 17:46:02.408496 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Mar 17 17:46:02.408682 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 17 17:46:02.409149 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Mar 17 17:46:02.463401 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Mar 17 17:46:02.463640 kernel: usbcore: registered new interface driver usbhid Mar 17 17:46:02.464388 kernel: usbhid: USB HID core driver Mar 17 17:46:02.973429 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:46:02.973868 disk-uuid[576]: The operation has completed successfully. Mar 17 17:46:03.027220 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:46:03.027336 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:46:03.063684 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:46:03.070271 sh[591]: Success Mar 17 17:46:03.086383 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 17:46:03.147841 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Mar 17 17:46:03.163570 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:46:03.165177 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 17 17:46:03.188958 kernel: BTRFS info (device dm-0): first mount of filesystem 5ecee764-de70-4de1-8711-3798360e0d13 Mar 17 17:46:03.189021 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:46:03.190059 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:46:03.190802 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:46:03.191376 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:46:03.197397 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 17 17:46:03.199110 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:46:03.200393 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:46:03.209663 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:46:03.213611 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:46:03.226424 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:46:03.226504 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:46:03.226531 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:46:03.233474 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 17:46:03.233548 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:46:03.244385 kernel: BTRFS info (device sda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:46:03.244087 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:46:03.250310 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 17 17:46:03.257580 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:46:03.344826 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:46:03.354602 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:46:03.366075 ignition[682]: Ignition 2.20.0 Mar 17 17:46:03.367001 ignition[682]: Stage: fetch-offline Mar 17 17:46:03.367062 ignition[682]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:46:03.368568 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:46:03.367076 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:46:03.367275 ignition[682]: parsed url from cmdline: "" Mar 17 17:46:03.367281 ignition[682]: no config URL provided Mar 17 17:46:03.367289 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:46:03.367303 ignition[682]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:46:03.367310 ignition[682]: failed to fetch config: resource requires networking Mar 17 17:46:03.367578 ignition[682]: Ignition finished successfully Mar 17 17:46:03.380930 systemd-networkd[780]: lo: Link UP Mar 17 17:46:03.380940 systemd-networkd[780]: lo: Gained carrier Mar 17 17:46:03.383200 systemd-networkd[780]: Enumeration completed Mar 17 17:46:03.383855 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:46:03.383859 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:46:03.384503 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:46:03.385311 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 17 17:46:03.385315 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:46:03.385962 systemd[1]: Reached target network.target - Network.
Mar 17 17:46:03.386083 systemd-networkd[780]: eth0: Link UP
Mar 17 17:46:03.386087 systemd-networkd[780]: eth0: Gained carrier
Mar 17 17:46:03.386102 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:46:03.391695 systemd-networkd[780]: eth1: Link UP
Mar 17 17:46:03.391700 systemd-networkd[780]: eth1: Gained carrier
Mar 17 17:46:03.391708 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:46:03.392463 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:46:03.408435 ignition[784]: Ignition 2.20.0
Mar 17 17:46:03.408444 ignition[784]: Stage: fetch
Mar 17 17:46:03.408622 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:46:03.408633 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:46:03.408754 ignition[784]: parsed url from cmdline: ""
Mar 17 17:46:03.408758 ignition[784]: no config URL provided
Mar 17 17:46:03.408763 ignition[784]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:46:03.408771 ignition[784]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:46:03.408861 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Mar 17 17:46:03.409727 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 17 17:46:03.427499 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:46:03.452501 systemd-networkd[780]: eth0: DHCPv4 address 138.199.148.212/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 17 17:46:03.609860 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Mar 17 17:46:03.616155 ignition[784]: GET result: OK
Mar 17 17:46:03.616292 ignition[784]: parsing config with SHA512: 9e7be9a9ae6a786c24311babb6aabee3ba77d491bb511f32328b5d0e3c6c5fa3209c30042bee2e9a13af818b27b4540bcdcf1f74bb3fcd6899261aeb40183f8d
Mar 17 17:46:03.623044 unknown[784]: fetched base config from "system"
Mar 17 17:46:03.623060 unknown[784]: fetched base config from "system"
Mar 17 17:46:03.623615 ignition[784]: fetch: fetch complete
Mar 17 17:46:03.623066 unknown[784]: fetched user config from "hetzner"
Mar 17 17:46:03.623620 ignition[784]: fetch: fetch passed
Mar 17 17:46:03.623666 ignition[784]: Ignition finished successfully
Mar 17 17:46:03.626161 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:46:03.633179 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:46:03.649995 ignition[792]: Ignition 2.20.0
Mar 17 17:46:03.650008 ignition[792]: Stage: kargs
Mar 17 17:46:03.650173 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:46:03.650182 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:46:03.651779 ignition[792]: kargs: kargs passed
Mar 17 17:46:03.651830 ignition[792]: Ignition finished successfully
Mar 17 17:46:03.653461 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:46:03.658583 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:46:03.670791 ignition[798]: Ignition 2.20.0
Mar 17 17:46:03.670803 ignition[798]: Stage: disks
Mar 17 17:46:03.671057 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:46:03.671068 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:46:03.674477 ignition[798]: disks: disks passed
Mar 17 17:46:03.674538 ignition[798]: Ignition finished successfully
Mar 17 17:46:03.677404 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:46:03.678099 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:46:03.679929 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:46:03.681414 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:46:03.682525 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:46:03.683544 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:46:03.688574 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:46:03.707598 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 17 17:46:03.711562 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:46:04.196531 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:46:04.245556 kernel: EXT4-fs (sda9): mounted filesystem 3914ef65-c5cd-468c-8ee7-964383d8e9e2 r/w with ordered data mode. Quota mode: none.
Mar 17 17:46:04.246614 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:46:04.248092 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:46:04.255508 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:46:04.260734 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:46:04.264528 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 17 17:46:04.270555 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:46:04.273843 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (815)
Mar 17 17:46:04.273866 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:46:04.270592 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:46:04.276417 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:46:04.276437 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:46:04.281809 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:46:04.281868 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:46:04.280779 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:46:04.286983 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:46:04.297658 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:46:04.354064 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:46:04.354977 coreos-metadata[817]: Mar 17 17:46:04.354 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 17 17:46:04.357173 coreos-metadata[817]: Mar 17 17:46:04.357 INFO Fetch successful
Mar 17 17:46:04.358217 coreos-metadata[817]: Mar 17 17:46:04.358 INFO wrote hostname ci-4230-1-0-b-a06069b96b to /sysroot/etc/hostname
Mar 17 17:46:04.361806 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:46:04.361111 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:46:04.366129 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:46:04.370711 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:46:04.468409 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:46:04.472506 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:46:04.477804 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:46:04.484404 kernel: BTRFS info (device sda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:46:04.507121 ignition[932]: INFO : Ignition 2.20.0
Mar 17 17:46:04.507121 ignition[932]: INFO : Stage: mount
Mar 17 17:46:04.507121 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:46:04.507121 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:46:04.507121 ignition[932]: INFO : mount: mount passed
Mar 17 17:46:04.507121 ignition[932]: INFO : Ignition finished successfully
Mar 17 17:46:04.509103 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:46:04.514835 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:46:04.518571 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:46:05.028685 systemd-networkd[780]: eth0: Gained IPv6LL
Mar 17 17:46:05.188338 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:46:05.196762 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:46:05.206392 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (943)
Mar 17 17:46:05.210768 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:46:05.210827 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:46:05.211564 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:46:05.215381 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:46:05.215437 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:46:05.218333 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:46:05.238966 ignition[960]: INFO : Ignition 2.20.0
Mar 17 17:46:05.238966 ignition[960]: INFO : Stage: files
Mar 17 17:46:05.240125 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:46:05.240125 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:46:05.242138 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:46:05.242138 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:46:05.242138 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:46:05.247201 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:46:05.247201 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:46:05.247201 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:46:05.245532 unknown[960]: wrote ssh authorized keys file for user: core
Mar 17 17:46:05.251142 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:46:05.251142 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 17:46:05.335660 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:46:05.412607 systemd-networkd[780]: eth1: Gained IPv6LL
Mar 17 17:46:05.855386 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:46:05.857295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:46:05.857295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:46:05.857295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:46:05.857295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:46:05.857295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:46:05.857295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:46:05.857295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:46:05.857295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:46:05.857295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:46:05.857295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:46:05.857295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:46:05.857295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:46:05.857295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:46:05.857295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Mar 17 17:46:06.415460 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 17 17:46:06.769301 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:46:06.769301 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 17 17:46:06.772101 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:46:06.775461 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:46:06.775461 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 17 17:46:06.775461 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 17 17:46:06.775461 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 17 17:46:06.775461 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 17 17:46:06.775461 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 17 17:46:06.775461 ignition[960]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:46:06.775461 ignition[960]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:46:06.775461 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:46:06.775461 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:46:06.775461 ignition[960]: INFO : files: files passed
Mar 17 17:46:06.775461 ignition[960]: INFO : Ignition finished successfully
Mar 17 17:46:06.776052 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:46:06.784823 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:46:06.787734 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:46:06.791193 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:46:06.791287 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:46:06.802091 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:46:06.802091 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:46:06.804534 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:46:06.805597 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:46:06.806567 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:46:06.814941 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:46:06.840515 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:46:06.840709 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:46:06.843149 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:46:06.844531 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:46:06.845561 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:46:06.847002 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:46:06.864800 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:46:06.873681 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:46:06.890557 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:46:06.892101 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:46:06.892841 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:46:06.894133 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:46:06.894266 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:46:06.896162 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:46:06.896794 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:46:06.898221 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:46:06.899295 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:46:06.900330 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:46:06.901382 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:46:06.902414 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:46:06.903591 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:46:06.904557 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:46:06.905637 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:46:06.906517 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:46:06.906642 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:46:06.907834 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:46:06.908446 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:46:06.909442 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:46:06.911375 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:46:06.912095 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:46:06.912207 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:46:06.913713 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:46:06.913821 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:46:06.915002 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:46:06.915089 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:46:06.916028 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 17:46:06.916119 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:46:06.925031 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:46:06.930645 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:46:06.931165 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:46:06.931898 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:46:06.933799 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:46:06.936200 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:46:06.943567 ignition[1012]: INFO : Ignition 2.20.0
Mar 17 17:46:06.944672 ignition[1012]: INFO : Stage: umount
Mar 17 17:46:06.944672 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:46:06.944672 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:46:06.947551 ignition[1012]: INFO : umount: umount passed
Mar 17 17:46:06.947551 ignition[1012]: INFO : Ignition finished successfully
Mar 17 17:46:06.945345 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:46:06.945552 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:46:06.949294 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:46:06.949450 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:46:06.950091 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:46:06.950140 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:46:06.953448 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:46:06.953499 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:46:06.955505 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:46:06.955559 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:46:06.956548 systemd[1]: Stopped target network.target - Network.
Mar 17 17:46:06.957290 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:46:06.957341 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:46:06.960226 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:46:06.961100 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:46:06.966492 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:46:06.967989 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:46:06.969956 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:46:06.974119 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:46:06.974173 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:46:06.975434 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:46:06.975499 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:46:06.977975 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:46:06.978042 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:46:06.979143 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:46:06.979189 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:46:06.981249 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:46:06.982197 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:46:06.984065 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:46:06.993733 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:46:06.993936 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:46:07.000921 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 17 17:46:07.001148 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:46:07.001257 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:46:07.004538 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 17 17:46:07.005320 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:46:07.006394 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:46:07.011598 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:46:07.012130 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:46:07.012190 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:46:07.013049 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:46:07.013096 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:46:07.014438 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:46:07.014487 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:46:07.015665 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:46:07.015710 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:46:07.019415 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:46:07.024037 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 17:46:07.024107 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:46:07.025049 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:46:07.025162 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:46:07.032175 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:46:07.032279 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:46:07.037667 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:46:07.037789 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:46:07.040413 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:46:07.040556 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:46:07.042548 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:46:07.042631 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:46:07.043807 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:46:07.043841 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:46:07.045007 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:46:07.045057 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:46:07.046407 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:46:07.046458 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:46:07.047741 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:46:07.047783 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:46:07.054576 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:46:07.055148 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:46:07.055209 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:46:07.059095 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 17 17:46:07.059148 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:46:07.059892 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:46:07.059937 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:46:07.062091 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:46:07.062136 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:46:07.066249 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 17:46:07.066314 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:46:07.067049 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:46:07.068550 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:46:07.069420 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:46:07.081166 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:46:07.091349 systemd[1]: Switching root.
Mar 17 17:46:07.122225 systemd-journald[236]: Journal stopped
Mar 17 17:46:08.018686 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:46:08.018757 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:46:08.018776 kernel: SELinux: policy capability open_perms=1
Mar 17 17:46:08.018787 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:46:08.018797 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:46:08.018808 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:46:08.018824 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:46:08.018834 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:46:08.018845 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:46:08.018890 kernel: audit: type=1403 audit(1742233567.239:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:46:08.018904 systemd[1]: Successfully loaded SELinux policy in 37.320ms.
Mar 17 17:46:08.018923 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.822ms.
Mar 17 17:46:08.018936 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:46:08.018948 systemd[1]: Detected virtualization kvm.
Mar 17 17:46:08.018963 systemd[1]: Detected architecture arm64.
Mar 17 17:46:08.018975 systemd[1]: Detected first boot.
Mar 17 17:46:08.018988 systemd[1]: Hostname set to .
Mar 17 17:46:08.019001 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:46:08.019018 zram_generator::config[1056]: No configuration found.
Mar 17 17:46:08.019038 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 17:46:08.019052 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:46:08.019064 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 17 17:46:08.019077 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:46:08.019089 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:46:08.019101 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:46:08.019117 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:46:08.019132 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:46:08.019144 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:46:08.019156 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:46:08.019167 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:46:08.019179 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:46:08.019191 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:46:08.019203 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:46:08.019215 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:46:08.019229 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:46:08.019241 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:46:08.019253 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:46:08.019265 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:46:08.019277 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:46:08.019288 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 17 17:46:08.019300 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:46:08.019314 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:46:08.019326 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:46:08.019337 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:46:08.019349 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:46:08.021415 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:46:08.021450 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:46:08.021462 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:46:08.021472 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:46:08.021487 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:46:08.021497 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:46:08.021507 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 17 17:46:08.021518 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:46:08.021530 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:46:08.021544 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:46:08.021556 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:46:08.021567 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:46:08.021577 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:46:08.021587 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:46:08.021597 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:46:08.021607 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:46:08.021616 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:46:08.021628 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:46:08.021640 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:46:08.021650 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:46:08.021660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:46:08.021671 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:46:08.021690 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:46:08.021700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:46:08.021710 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:46:08.021721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:46:08.021731 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:46:08.021743 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:46:08.021753 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:46:08.021763 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:46:08.021773 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:46:08.021783 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:46:08.021794 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:46:08.021804 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:46:08.021814 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:46:08.021826 kernel: fuse: init (API version 7.39)
Mar 17 17:46:08.021837 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:46:08.021847 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:46:08.021908 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:46:08.021920 kernel: ACPI: bus type drm_connector registered
Mar 17 17:46:08.021933 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 17 17:46:08.021944 kernel: loop: module loaded
Mar 17 17:46:08.021977 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:46:08.021988 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:46:08.021998 systemd[1]: Stopped verity-setup.service.
Mar 17 17:46:08.022008 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:46:08.022018 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:46:08.022031 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:46:08.022041 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:46:08.022086 systemd-journald[1121]: Collecting audit messages is disabled.
Mar 17 17:46:08.022110 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:46:08.022120 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:46:08.022131 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:46:08.022143 systemd-journald[1121]: Journal started
Mar 17 17:46:08.022165 systemd-journald[1121]: Runtime Journal (/run/log/journal/91d6e64d566c4ae590d94bb9e26d7be8) is 8M, max 76.6M, 68.6M free.
Mar 17 17:46:07.784316 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:46:07.794834 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 17 17:46:07.795720 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:46:08.023944 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:46:08.026390 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:46:08.026535 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:46:08.029072 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:46:08.030508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:46:08.031383 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:46:08.031655 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:46:08.032504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:46:08.032667 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:46:08.033659 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:46:08.033810 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:46:08.035708 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:46:08.035881 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:46:08.036728 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:46:08.039681 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:46:08.040610 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:46:08.048162 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 17 17:46:08.054683 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:46:08.061802 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:46:08.071475 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:46:08.072092 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:46:08.072151 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:46:08.075850 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 17 17:46:08.079567 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:46:08.081260 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:46:08.081984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:46:08.086563 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:46:08.088850 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:46:08.091153 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:46:08.094545 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:46:08.096096 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:46:08.100536 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:46:08.106145 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:46:08.112994 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:46:08.118013 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:46:08.119109 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:46:08.120064 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:46:08.121072 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:46:08.131725 systemd-journald[1121]: Time spent on flushing to /var/log/journal/91d6e64d566c4ae590d94bb9e26d7be8 is 48.244ms for 1141 entries.
Mar 17 17:46:08.131725 systemd-journald[1121]: System Journal (/var/log/journal/91d6e64d566c4ae590d94bb9e26d7be8) is 8M, max 584.8M, 576.8M free.
Mar 17 17:46:08.222710 systemd-journald[1121]: Received client request to flush runtime journal.
Mar 17 17:46:08.222765 kernel: loop0: detected capacity change from 0 to 8
Mar 17 17:46:08.222787 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:46:08.222801 kernel: loop1: detected capacity change from 0 to 123192
Mar 17 17:46:08.150447 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:46:08.153759 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:46:08.166758 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 17 17:46:08.167813 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:46:08.180345 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:46:08.184380 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:46:08.215848 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Mar 17 17:46:08.215902 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Mar 17 17:46:08.222450 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:46:08.224503 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:46:08.229684 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:46:08.238203 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 17 17:46:08.255456 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:46:08.269527 kernel: loop2: detected capacity change from 0 to 113512
Mar 17 17:46:08.302259 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:46:08.310445 kernel: loop3: detected capacity change from 0 to 194096
Mar 17 17:46:08.318897 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:46:08.347142 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Mar 17 17:46:08.347158 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Mar 17 17:46:08.356368 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:46:08.365661 kernel: loop4: detected capacity change from 0 to 8
Mar 17 17:46:08.371385 kernel: loop5: detected capacity change from 0 to 123192
Mar 17 17:46:08.391404 kernel: loop6: detected capacity change from 0 to 113512
Mar 17 17:46:08.413413 kernel: loop7: detected capacity change from 0 to 194096
Mar 17 17:46:08.441007 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 17 17:46:08.441895 (sd-merge)[1205]: Merged extensions into '/usr'.
Mar 17 17:46:08.446402 systemd[1]: Reload requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:46:08.446525 systemd[1]: Reloading...
Mar 17 17:46:08.569397 zram_generator::config[1235]: No configuration found.
Mar 17 17:46:08.627079 ldconfig[1170]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:46:08.705922 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:46:08.767996 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:46:08.768473 systemd[1]: Reloading finished in 321 ms.
Mar 17 17:46:08.799399 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:46:08.803384 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:46:08.816917 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:46:08.822661 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:46:08.830200 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:46:08.842596 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:46:08.849524 systemd[1]: Reload requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:46:08.849544 systemd[1]: Reloading...
Mar 17 17:46:08.851417 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:46:08.851973 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:46:08.852994 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:46:08.853205 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Mar 17 17:46:08.853247 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Mar 17 17:46:08.858704 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:46:08.858734 systemd-tmpfiles[1271]: Skipping /boot
Mar 17 17:46:08.869247 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:46:08.869259 systemd-tmpfiles[1271]: Skipping /boot
Mar 17 17:46:08.897908 systemd-udevd[1274]: Using default interface naming scheme 'v255'.
Mar 17 17:46:08.937401 zram_generator::config[1300]: No configuration found.
Mar 17 17:46:09.101341 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:46:09.146772 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1326)
Mar 17 17:46:09.172394 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:46:09.183082 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 17 17:46:09.183303 systemd[1]: Reloading finished in 333 ms.
Mar 17 17:46:09.200121 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:46:09.202303 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:46:09.293393 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Mar 17 17:46:09.295405 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 17 17:46:09.296047 kernel: [drm] features: -context_init
Mar 17 17:46:09.296073 kernel: [drm] number of scanouts: 1
Mar 17 17:46:09.294971 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:46:09.297251 kernel: [drm] number of cap sets: 0
Mar 17 17:46:09.298417 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Mar 17 17:46:09.300891 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 17 17:46:09.304050 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 17 17:46:09.306463 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 17:46:09.311389 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 17 17:46:09.317720 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:46:09.321554 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:46:09.323550 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:46:09.326567 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:46:09.329550 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:46:09.332871 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:46:09.338562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:46:09.342754 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:46:09.346705 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:46:09.347898 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:46:09.350349 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:46:09.356526 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:46:09.365595 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:46:09.377542 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:46:09.391264 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:46:09.394269 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:46:09.396337 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:46:09.397677 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:46:09.398655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:46:09.398807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:46:09.400779 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:46:09.400963 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:46:09.402547 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:46:09.402697 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:46:09.414021 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:46:09.415087 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:46:09.425965 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:46:09.431418 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:46:09.431670 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:46:09.438894 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:46:09.445731 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:46:09.446823 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:46:09.448176 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:46:09.451749 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:46:09.454399 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:46:09.457719 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:46:09.459839 augenrules[1427]: No rules
Mar 17 17:46:09.466227 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:46:09.466530 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:46:09.476441 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:46:09.485603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:46:09.488825 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:46:09.493520 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:46:09.507785 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:46:09.523524 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:46:09.561470 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:46:09.562317 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:46:09.579220 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:46:09.581186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:46:09.591424 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:46:09.624785 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:46:09.637061 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:46:09.638897 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:46:09.642536 systemd-networkd[1397]: lo: Link UP
Mar 17 17:46:09.643622 systemd-networkd[1397]: lo: Gained carrier
Mar 17 17:46:09.645684 systemd-networkd[1397]: Enumeration completed
Mar 17 17:46:09.645782 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:46:09.646870 systemd-resolved[1398]: Positive Trust Anchors:
Mar 17 17:46:09.647124 systemd-resolved[1398]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:46:09.647205 systemd-resolved[1398]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:46:09.647559 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:46:09.647571 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:46:09.648689 systemd-networkd[1397]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:46:09.648702 systemd-networkd[1397]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:46:09.649626 systemd-networkd[1397]: eth0: Link UP
Mar 17 17:46:09.649636 systemd-networkd[1397]: eth0: Gained carrier
Mar 17 17:46:09.649650 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:46:09.651753 systemd-resolved[1398]: Using system hostname 'ci-4230-1-0-b-a06069b96b'.
Mar 17 17:46:09.653556 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 17 17:46:09.654736 systemd-networkd[1397]: eth1: Link UP
Mar 17 17:46:09.654744 systemd-networkd[1397]: eth1: Gained carrier
Mar 17 17:46:09.654765 systemd-networkd[1397]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:46:09.657494 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:46:09.658806 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:46:09.660248 systemd[1]: Reached target network.target - Network.
Mar 17 17:46:09.661549 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:46:09.663449 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:46:09.664349 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:46:09.665035 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:46:09.665838 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:46:09.666509 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:46:09.667269 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:46:09.669565 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:46:09.669625 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:46:09.670596 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:46:09.673328 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:46:09.677474 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:46:09.678436 systemd-networkd[1397]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:46:09.680325 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection.
Mar 17 17:46:09.682031 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 17 17:46:09.682940 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 17 17:46:09.683623 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 17 17:46:09.694674 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:46:09.696745 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 17 17:46:09.699749 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 17 17:46:09.700606 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:46:09.701839 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:46:09.702530 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:46:09.703112 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:46:09.703151 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:46:09.707503 systemd-networkd[1397]: eth0: DHCPv4 address 138.199.148.212/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 17 17:46:09.708016 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection.
Mar 17 17:46:09.708618 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection.
Mar 17 17:46:09.709623 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:46:09.714143 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 17:46:09.718581 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:46:09.723584 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:46:09.732839 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:46:09.733399 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:46:09.741793 jq[1467]: false
Mar 17 17:46:09.742590 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:46:09.748528 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:46:09.752462 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 17 17:46:09.756867 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:46:09.761561 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:46:09.767820 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:46:09.771453 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:46:09.772016 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:46:09.773542 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:46:09.779376 extend-filesystems[1468]: Found loop4
Mar 17 17:46:09.779376 extend-filesystems[1468]: Found loop5
Mar 17 17:46:09.779376 extend-filesystems[1468]: Found loop6
Mar 17 17:46:09.779376 extend-filesystems[1468]: Found loop7
Mar 17 17:46:09.779376 extend-filesystems[1468]: Found sda
Mar 17 17:46:09.779376 extend-filesystems[1468]: Found sda1
Mar 17 17:46:09.779376 extend-filesystems[1468]: Found sda2
Mar 17 17:46:09.779376 extend-filesystems[1468]: Found sda3
Mar 17 17:46:09.779376 extend-filesystems[1468]: Found usr
Mar 17 17:46:09.779376 extend-filesystems[1468]: Found sda4
Mar 17 17:46:09.779376 extend-filesystems[1468]: Found sda6
Mar 17 17:46:09.779376 extend-filesystems[1468]: Found sda7
Mar 17 17:46:09.779376 extend-filesystems[1468]: Found sda9
Mar 17 17:46:09.779376 extend-filesystems[1468]: Checking size of /dev/sda9
Mar 17 17:46:09.829932 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Mar 17 17:46:09.781208 dbus-daemon[1464]: [system] SELinux support is enabled
Mar 17 17:46:09.830807 coreos-metadata[1463]: Mar 17 17:46:09.800 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 17 17:46:09.830807 coreos-metadata[1463]: Mar 17 17:46:09.803 INFO Fetch successful
Mar 17 17:46:09.830807 coreos-metadata[1463]: Mar 17 17:46:09.803 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 17 17:46:09.830807 coreos-metadata[1463]: Mar 17 17:46:09.804 INFO Fetch successful
Mar 17 17:46:09.831445 extend-filesystems[1468]: Resized partition /dev/sda9
Mar 17 17:46:09.780107 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:46:09.834188 extend-filesystems[1495]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:46:09.781815 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:46:09.790798 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:46:09.837224 jq[1484]: true
Mar 17 17:46:09.793428 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:46:09.793768 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:46:09.793974 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:46:09.800265 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:46:09.800310 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:46:09.803116 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:46:09.803137 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:46:09.830635 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:46:09.830824 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:46:09.847009 update_engine[1482]: I20250317 17:46:09.846802 1482 main.cc:92] Flatcar Update Engine starting
Mar 17 17:46:09.850655 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:46:09.852629 update_engine[1482]: I20250317 17:46:09.852438 1482 update_check_scheduler.cc:74] Next update check in 8m22s
Mar 17 17:46:09.859599 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:46:09.865588 jq[1499]: true
Mar 17 17:46:09.863222 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:46:09.883048 tar[1488]: linux-arm64/helm
Mar 17 17:46:09.937677 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1310)
Mar 17 17:46:09.954972 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Mar 17 17:46:09.969994 extend-filesystems[1495]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 17 17:46:09.969994 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 5
Mar 17 17:46:09.969994 extend-filesystems[1495]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Mar 17 17:46:09.973183 extend-filesystems[1468]: Resized filesystem in /dev/sda9
Mar 17 17:46:09.973183 extend-filesystems[1468]: Found sr0
Mar 17 17:46:09.971669 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:46:09.971899 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:46:09.981520 systemd-logind[1479]: New seat seat0.
Mar 17 17:46:09.984493 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 17 17:46:09.984509 systemd-logind[1479]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Mar 17 17:46:09.984826 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:46:09.995618 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 17 17:46:09.996622 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:46:10.013160 bash[1534]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:46:10.016725 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:46:10.033765 systemd[1]: Starting sshkeys.service...
Mar 17 17:46:10.046052 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 17:46:10.055755 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 17:46:10.101957 coreos-metadata[1538]: Mar 17 17:46:10.101 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Mar 17 17:46:10.103991 coreos-metadata[1538]: Mar 17 17:46:10.103 INFO Fetch successful
Mar 17 17:46:10.107204 unknown[1538]: wrote ssh authorized keys file for user: core
Mar 17 17:46:10.136459 update-ssh-keys[1549]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:46:10.137926 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 17 17:46:10.141741 systemd[1]: Finished sshkeys.service.
Mar 17 17:46:10.223556 locksmithd[1506]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:46:10.290831 containerd[1502]: time="2025-03-17T17:46:10.289697160Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:46:10.361555 containerd[1502]: time="2025-03-17T17:46:10.361013160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:46:10.362538 containerd[1502]: time="2025-03-17T17:46:10.362504080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:46:10.364461 containerd[1502]: time="2025-03-17T17:46:10.363464760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:46:10.364461 containerd[1502]: time="2025-03-17T17:46:10.363493640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:46:10.364461 containerd[1502]: time="2025-03-17T17:46:10.363654360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:46:10.364461 containerd[1502]: time="2025-03-17T17:46:10.363673720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:46:10.364461 containerd[1502]: time="2025-03-17T17:46:10.363735440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:46:10.364461 containerd[1502]: time="2025-03-17T17:46:10.363747200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:46:10.364461 containerd[1502]: time="2025-03-17T17:46:10.363964920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:46:10.364461 containerd[1502]: time="2025-03-17T17:46:10.363980280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:46:10.364461 containerd[1502]: time="2025-03-17T17:46:10.363992480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:46:10.364461 containerd[1502]: time="2025-03-17T17:46:10.364001920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:46:10.364461 containerd[1502]: time="2025-03-17T17:46:10.364070560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:46:10.364461 containerd[1502]: time="2025-03-17T17:46:10.364245600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:46:10.365653 containerd[1502]: time="2025-03-17T17:46:10.365627800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:46:10.365725 containerd[1502]: time="2025-03-17T17:46:10.365712880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:46:10.365917 containerd[1502]: time="2025-03-17T17:46:10.365899400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:46:10.366296 containerd[1502]: time="2025-03-17T17:46:10.366278080Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:46:10.372480 containerd[1502]: time="2025-03-17T17:46:10.372441880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:46:10.372662 containerd[1502]: time="2025-03-17T17:46:10.372648200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:46:10.372765 containerd[1502]: time="2025-03-17T17:46:10.372752720Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:46:10.372901 containerd[1502]: time="2025-03-17T17:46:10.372881320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:46:10.373382 containerd[1502]: time="2025-03-17T17:46:10.373206000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:46:10.373824 containerd[1502]: time="2025-03-17T17:46:10.373681680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:46:10.375390 containerd[1502]: time="2025-03-17T17:46:10.374577200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:46:10.375390 containerd[1502]: time="2025-03-17T17:46:10.374702000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:46:10.375390 containerd[1502]: time="2025-03-17T17:46:10.374717400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:46:10.375390 containerd[1502]: time="2025-03-17T17:46:10.374730720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:46:10.375390 containerd[1502]: time="2025-03-17T17:46:10.374743320Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:46:10.375390 containerd[1502]: time="2025-03-17T17:46:10.374757000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:46:10.375390 containerd[1502]: time="2025-03-17T17:46:10.374768600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:46:10.375390 containerd[1502]: time="2025-03-17T17:46:10.374781840Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:46:10.375390 containerd[1502]: time="2025-03-17T17:46:10.374798000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:46:10.375390 containerd[1502]: time="2025-03-17T17:46:10.374811680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:46:10.375390 containerd[1502]: time="2025-03-17T17:46:10.374823640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:46:10.375390 containerd[1502]: time="2025-03-17T17:46:10.374835440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:46:10.375390 containerd[1502]: time="2025-03-17T17:46:10.374902120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375390 containerd[1502]: time="2025-03-17T17:46:10.374915040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.374927280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.374940480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.374955840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.374968080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.374979120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.374991320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.375003840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.375017880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.375028760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.375041120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.375052920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.375067560Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.375087640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.375100480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.375657 containerd[1502]: time="2025-03-17T17:46:10.375111520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:46:10.379265 containerd[1502]: time="2025-03-17T17:46:10.377648880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:46:10.379265 containerd[1502]: time="2025-03-17T17:46:10.377682440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:46:10.379265 containerd[1502]: time="2025-03-17T17:46:10.377703040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:46:10.379265 containerd[1502]: time="2025-03-17T17:46:10.377716960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:46:10.379265 containerd[1502]: time="2025-03-17T17:46:10.377725840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.379265 containerd[1502]: time="2025-03-17T17:46:10.377738600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:46:10.379265 containerd[1502]: time="2025-03-17T17:46:10.377747880Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:46:10.379265 containerd[1502]: time="2025-03-17T17:46:10.377758720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:46:10.379506 containerd[1502]: time="2025-03-17T17:46:10.378141600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 17:46:10.379506 containerd[1502]: time="2025-03-17T17:46:10.378189400Z" level=info msg="Connect containerd service"
Mar 17 17:46:10.379506 containerd[1502]: time="2025-03-17T17:46:10.378226600Z" level=info msg="using legacy CRI server"
Mar 17 17:46:10.379506 containerd[1502]: time="2025-03-17T17:46:10.378233160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 17:46:10.379506 containerd[1502]: time="2025-03-17T17:46:10.378481880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 17:46:10.381455 containerd[1502]: time="2025-03-17T17:46:10.381428560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:46:10.382306 containerd[1502]: time="2025-03-17T17:46:10.382286120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 17:46:10.382949 containerd[1502]: time="2025-03-17T17:46:10.382920560Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 17:46:10.383181 containerd[1502]: time="2025-03-17T17:46:10.383156200Z" level=info msg="Start subscribing containerd event"
Mar 17 17:46:10.383410 containerd[1502]: time="2025-03-17T17:46:10.383395200Z" level=info msg="Start recovering state"
Mar 17 17:46:10.383646 containerd[1502]: time="2025-03-17T17:46:10.383538600Z" level=info msg="Start event monitor"
Mar 17 17:46:10.383714 containerd[1502]: time="2025-03-17T17:46:10.383693920Z" level=info msg="Start snapshots syncer"
Mar 17 17:46:10.383914 containerd[1502]: time="2025-03-17T17:46:10.383762800Z" level=info msg="Start cni network conf syncer for default"
Mar 17 17:46:10.383914 containerd[1502]: time="2025-03-17T17:46:10.383775160Z" level=info msg="Start streaming server"
Mar 17 17:46:10.384261 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 17:46:10.385332 containerd[1502]: time="2025-03-17T17:46:10.384934080Z" level=info msg="containerd successfully booted in 0.099350s"
Mar 17 17:46:10.535222 tar[1488]: linux-arm64/LICENSE
Mar 17 17:46:10.535475 tar[1488]: linux-arm64/README.md
Mar 17 17:46:10.551407 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 17 17:46:10.916537 systemd-networkd[1397]: eth1: Gained IPv6LL
Mar 17 17:46:10.917926 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection.
Mar 17 17:46:10.918596 systemd-networkd[1397]: eth0: Gained IPv6LL
Mar 17 17:46:10.919242 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection.
Mar 17 17:46:10.924313 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:46:10.925662 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:46:10.933665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:46:10.937637 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:46:10.969720 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 17 17:46:11.610536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:46:11.621181 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:46:11.734337 sshd_keygen[1513]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:46:11.763446 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:46:11.771777 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:46:11.781301 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:46:11.782468 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:46:11.791659 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:46:11.801795 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:46:11.810206 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:46:11.812736 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 17 17:46:11.815750 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:46:11.816334 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 17 17:46:11.817186 systemd[1]: Startup finished in 779ms (kernel) + 6.544s (initrd) + 4.615s (userspace) = 11.939s.
Mar 17 17:46:12.198119 kubelet[1576]: E0317 17:46:12.198044 1576 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:46:12.203162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:46:12.203485 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:46:12.204423 systemd[1]: kubelet.service: Consumed 836ms CPU time, 241M memory peak.
Mar 17 17:46:22.454634 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:46:22.468318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:46:22.569300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:46:22.583296 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:46:22.642668 kubelet[1614]: E0317 17:46:22.642605 1614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:46:22.645899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:46:22.646332 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:46:22.646677 systemd[1]: kubelet.service: Consumed 156ms CPU time, 95.8M memory peak.
Mar 17 17:46:32.897041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 17:46:32.906748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:46:33.016643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:46:33.016732 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:46:33.061017 kubelet[1630]: E0317 17:46:33.060936 1630 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:46:33.063653 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:46:33.063898 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:46:33.064514 systemd[1]: kubelet.service: Consumed 139ms CPU time, 96.3M memory peak.
Mar 17 17:46:40.994666 systemd-timesyncd[1400]: Contacted time server 158.220.97.17:123 (2.flatcar.pool.ntp.org).
Mar 17 17:46:40.994784 systemd-timesyncd[1400]: Initial clock synchronization to Mon 2025-03-17 17:46:40.686077 UTC.
Mar 17 17:46:43.224583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 17:46:43.229579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:46:43.329816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:46:43.334421 (kubelet)[1646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:46:43.383680 kubelet[1646]: E0317 17:46:43.383555 1646 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:46:43.386224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:46:43.386439 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:46:43.387201 systemd[1]: kubelet.service: Consumed 145ms CPU time, 95.1M memory peak.
Mar 17 17:46:53.474945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 17 17:46:53.491733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:46:53.599845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:46:53.604274 (kubelet)[1662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:46:53.661637 kubelet[1662]: E0317 17:46:53.661563 1662 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:46:53.664716 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:46:53.664874 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:46:53.665343 systemd[1]: kubelet.service: Consumed 152ms CPU time, 97.2M memory peak.
Mar 17 17:46:54.916892 update_engine[1482]: I20250317 17:46:54.916749 1482 update_attempter.cc:509] Updating boot flags...
Mar 17 17:46:54.965456 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1679)
Mar 17 17:46:55.023378 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1681)
Mar 17 17:46:55.076779 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1681)
Mar 17 17:47:03.725190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 17 17:47:03.736718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:47:03.853840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:47:03.858603 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:47:03.903555 kubelet[1699]: E0317 17:47:03.903474 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:47:03.907413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:47:03.907685 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:47:03.908327 systemd[1]: kubelet.service: Consumed 145ms CPU time, 96.4M memory peak.
Mar 17 17:47:13.974840 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 17 17:47:13.980696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:47:14.084129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:47:14.091900 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:47:14.152408 kubelet[1715]: E0317 17:47:14.152277 1715 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:47:14.157480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:47:14.157769 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:47:14.158665 systemd[1]: kubelet.service: Consumed 162ms CPU time, 96.1M memory peak.
Mar 17 17:47:24.225702 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 17 17:47:24.234673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:47:24.345707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:47:24.350270 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:47:24.396058 kubelet[1730]: E0317 17:47:24.395988 1730 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:47:24.398073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:47:24.398215 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:47:24.398662 systemd[1]: kubelet.service: Consumed 143ms CPU time, 96.4M memory peak.
Mar 17 17:47:34.475032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 17 17:47:34.484740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:47:34.591570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:47:34.602979 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:47:34.653017 kubelet[1747]: E0317 17:47:34.652960 1747 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:47:34.656015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:47:34.656244 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:47:34.657092 systemd[1]: kubelet.service: Consumed 149ms CPU time, 92.5M memory peak.
Mar 17 17:47:44.725065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Mar 17 17:47:44.740640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:47:44.865728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:47:44.866209 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:47:44.908777 kubelet[1762]: E0317 17:47:44.908720 1762 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:47:44.911152 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:47:44.911335 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:47:44.911681 systemd[1]: kubelet.service: Consumed 137ms CPU time, 94.8M memory peak.
Mar 17 17:47:54.974858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Mar 17 17:47:54.982695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:47:55.107650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:47:55.107731 (kubelet)[1778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:47:55.155285 kubelet[1778]: E0317 17:47:55.155222 1778 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:47:55.157947 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:47:55.158093 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:47:55.158595 systemd[1]: kubelet.service: Consumed 144ms CPU time, 94.3M memory peak.
Mar 17 17:47:59.401718 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 17 17:47:59.414304 systemd[1]: Started sshd@0-138.199.148.212:22-139.178.89.65:44230.service - OpenSSH per-connection server daemon (139.178.89.65:44230).
Mar 17 17:48:00.412906 sshd[1787]: Accepted publickey for core from 139.178.89.65 port 44230 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:48:00.416285 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:00.425433 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 17 17:48:00.433147 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 17 17:48:00.443883 systemd-logind[1479]: New session 1 of user core.
Mar 17 17:48:00.448738 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 17 17:48:00.456899 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 17 17:48:00.461571 (systemd)[1791]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 17:48:00.464644 systemd-logind[1479]: New session c1 of user core.
Mar 17 17:48:00.592742 systemd[1791]: Queued start job for default target default.target.
Mar 17 17:48:00.601537 systemd[1791]: Created slice app.slice - User Application Slice.
Mar 17 17:48:00.601600 systemd[1791]: Reached target paths.target - Paths.
Mar 17 17:48:00.601674 systemd[1791]: Reached target timers.target - Timers.
Mar 17 17:48:00.603630 systemd[1791]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 17 17:48:00.617258 systemd[1791]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 17 17:48:00.617476 systemd[1791]: Reached target sockets.target - Sockets.
Mar 17 17:48:00.617529 systemd[1791]: Reached target basic.target - Basic System.
Mar 17 17:48:00.617566 systemd[1791]: Reached target default.target - Main User Target.
Mar 17 17:48:00.617596 systemd[1791]: Startup finished in 146ms.
Mar 17 17:48:00.618457 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 17 17:48:00.630897 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 17 17:48:01.331214 systemd[1]: Started sshd@1-138.199.148.212:22-139.178.89.65:50126.service - OpenSSH per-connection server daemon (139.178.89.65:50126).
Mar 17 17:48:02.334465 sshd[1803]: Accepted publickey for core from 139.178.89.65 port 50126 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:48:02.336400 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:02.343726 systemd-logind[1479]: New session 2 of user core.
Mar 17 17:48:02.349582 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 17 17:48:03.014580 sshd[1805]: Connection closed by 139.178.89.65 port 50126
Mar 17 17:48:03.015420 sshd-session[1803]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:03.020479 systemd-logind[1479]: Session 2 logged out. Waiting for processes to exit.
Mar 17 17:48:03.021508 systemd[1]: sshd@1-138.199.148.212:22-139.178.89.65:50126.service: Deactivated successfully.
Mar 17 17:48:03.024009 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 17:48:03.025204 systemd-logind[1479]: Removed session 2.
Mar 17 17:48:03.189843 systemd[1]: Started sshd@2-138.199.148.212:22-139.178.89.65:50130.service - OpenSSH per-connection server daemon (139.178.89.65:50130).
Mar 17 17:48:04.164659 sshd[1811]: Accepted publickey for core from 139.178.89.65 port 50130 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:48:04.166306 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:04.172450 systemd-logind[1479]: New session 3 of user core.
Mar 17 17:48:04.177667 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 17 17:48:04.834653 sshd[1813]: Connection closed by 139.178.89.65 port 50130
Mar 17 17:48:04.835422 sshd-session[1811]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:04.840575 systemd[1]: sshd@2-138.199.148.212:22-139.178.89.65:50130.service: Deactivated successfully.
Mar 17 17:48:04.846037 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 17:48:04.847557 systemd-logind[1479]: Session 3 logged out. Waiting for processes to exit.
Mar 17 17:48:04.848584 systemd-logind[1479]: Removed session 3.
Mar 17 17:48:05.011975 systemd[1]: Started sshd@3-138.199.148.212:22-139.178.89.65:50144.service - OpenSSH per-connection server daemon (139.178.89.65:50144).
Mar 17 17:48:05.224534 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Mar 17 17:48:05.232599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:48:05.349533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:48:05.354598 (kubelet)[1828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:48:05.398127 kubelet[1828]: E0317 17:48:05.398009 1828 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:48:05.400908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:48:05.401092 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:48:05.401652 systemd[1]: kubelet.service: Consumed 142ms CPU time, 96.1M memory peak.
Mar 17 17:48:05.999374 sshd[1819]: Accepted publickey for core from 139.178.89.65 port 50144 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:48:06.001191 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:06.006198 systemd-logind[1479]: New session 4 of user core.
Mar 17 17:48:06.013690 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 17 17:48:06.683589 sshd[1837]: Connection closed by 139.178.89.65 port 50144
Mar 17 17:48:06.684520 sshd-session[1819]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:06.689721 systemd[1]: sshd@3-138.199.148.212:22-139.178.89.65:50144.service: Deactivated successfully.
Mar 17 17:48:06.693132 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 17:48:06.694190 systemd-logind[1479]: Session 4 logged out. Waiting for processes to exit.
Mar 17 17:48:06.695389 systemd-logind[1479]: Removed session 4.
Mar 17 17:48:06.862832 systemd[1]: Started sshd@4-138.199.148.212:22-139.178.89.65:50148.service - OpenSSH per-connection server daemon (139.178.89.65:50148).
Mar 17 17:48:07.844771 sshd[1843]: Accepted publickey for core from 139.178.89.65 port 50148 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:48:07.846827 sshd-session[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:07.852397 systemd-logind[1479]: New session 5 of user core.
Mar 17 17:48:07.865693 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 17 17:48:08.374858 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 17 17:48:08.375144 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:48:08.394720 sudo[1846]: pam_unix(sudo:session): session closed for user root
Mar 17 17:48:08.554186 sshd[1845]: Connection closed by 139.178.89.65 port 50148
Mar 17 17:48:08.555263 sshd-session[1843]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:08.561197 systemd[1]: sshd@4-138.199.148.212:22-139.178.89.65:50148.service: Deactivated successfully.
Mar 17 17:48:08.563934 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 17:48:08.566104 systemd-logind[1479]: Session 5 logged out. Waiting for processes to exit.
Mar 17 17:48:08.567344 systemd-logind[1479]: Removed session 5.
Mar 17 17:48:08.734621 systemd[1]: Started sshd@5-138.199.148.212:22-139.178.89.65:50156.service - OpenSSH per-connection server daemon (139.178.89.65:50156).
Mar 17 17:48:09.720673 sshd[1852]: Accepted publickey for core from 139.178.89.65 port 50156 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:48:09.723276 sshd-session[1852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:09.730420 systemd-logind[1479]: New session 6 of user core.
Mar 17 17:48:09.736734 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 17 17:48:10.244824 sudo[1856]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 17 17:48:10.245104 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:48:10.250815 sudo[1856]: pam_unix(sudo:session): session closed for user root
Mar 17 17:48:10.256284 sudo[1855]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 17 17:48:10.256617 sudo[1855]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:48:10.272224 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:48:10.301557 augenrules[1878]: No rules
Mar 17 17:48:10.302853 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:48:10.303209 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:48:10.304595 sudo[1855]: pam_unix(sudo:session): session closed for user root
Mar 17 17:48:10.463699 sshd[1854]: Connection closed by 139.178.89.65 port 50156
Mar 17 17:48:10.464649 sshd-session[1852]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:10.469654 systemd[1]: sshd@5-138.199.148.212:22-139.178.89.65:50156.service: Deactivated successfully.
Mar 17 17:48:10.471761 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 17:48:10.473746 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit.
Mar 17 17:48:10.474984 systemd-logind[1479]: Removed session 6.
Mar 17 17:48:10.643950 systemd[1]: Started sshd@6-138.199.148.212:22-139.178.89.65:50164.service - OpenSSH per-connection server daemon (139.178.89.65:50164).
Mar 17 17:48:11.635163 sshd[1887]: Accepted publickey for core from 139.178.89.65 port 50164 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:48:11.636989 sshd-session[1887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:11.646565 systemd-logind[1479]: New session 7 of user core.
Mar 17 17:48:11.649530 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 17 17:48:12.163292 sudo[1890]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 17:48:12.163596 sudo[1890]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:48:12.509855 (dockerd)[1908]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 17 17:48:12.509881 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 17 17:48:12.732079 dockerd[1908]: time="2025-03-17T17:48:12.731997595Z" level=info msg="Starting up"
Mar 17 17:48:12.810896 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport416563705-merged.mount: Deactivated successfully.
Mar 17 17:48:12.824640 systemd[1]: var-lib-docker-metacopy\x2dcheck3833444437-merged.mount: Deactivated successfully.
Mar 17 17:48:12.835014 dockerd[1908]: time="2025-03-17T17:48:12.834718297Z" level=info msg="Loading containers: start."
Mar 17 17:48:13.000518 kernel: Initializing XFRM netlink socket
Mar 17 17:48:13.090935 systemd-networkd[1397]: docker0: Link UP
Mar 17 17:48:13.126254 dockerd[1908]: time="2025-03-17T17:48:13.126015029Z" level=info msg="Loading containers: done."
Mar 17 17:48:13.145443 dockerd[1908]: time="2025-03-17T17:48:13.144877869Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 17:48:13.145443 dockerd[1908]: time="2025-03-17T17:48:13.145068158Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Mar 17 17:48:13.145443 dockerd[1908]: time="2025-03-17T17:48:13.145404253Z" level=info msg="Daemon has completed initialization"
Mar 17 17:48:13.181561 dockerd[1908]: time="2025-03-17T17:48:13.181470739Z" level=info msg="API listen on /run/docker.sock"
Mar 17 17:48:13.182387 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 17 17:48:14.277042 containerd[1502]: time="2025-03-17T17:48:14.276983987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\""
Mar 17 17:48:14.884283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2763303173.mount: Deactivated successfully.
Mar 17 17:48:15.474535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Mar 17 17:48:15.483921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:48:15.605564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:48:15.609732 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:48:15.651837 kubelet[2163]: E0317 17:48:15.651651 2163 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:48:15.655864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:48:15.656234 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:48:15.656638 systemd[1]: kubelet.service: Consumed 135ms CPU time, 93.2M memory peak.
Mar 17 17:48:15.891256 containerd[1502]: time="2025-03-17T17:48:15.891127387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:15.893234 containerd[1502]: time="2025-03-17T17:48:15.892577730Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793616"
Mar 17 17:48:15.894578 containerd[1502]: time="2025-03-17T17:48:15.894534416Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:15.902749 containerd[1502]: time="2025-03-17T17:48:15.902694653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:15.904047 containerd[1502]: time="2025-03-17T17:48:15.904005790Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 1.62697632s"
Mar 17 17:48:15.904166 containerd[1502]: time="2025-03-17T17:48:15.904148796Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\""
Mar 17 17:48:15.924295 containerd[1502]: time="2025-03-17T17:48:15.924261396Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 17:48:17.214374 containerd[1502]: time="2025-03-17T17:48:17.214179771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:17.216197 containerd[1502]: time="2025-03-17T17:48:17.216147496Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861187"
Mar 17 17:48:17.216305 containerd[1502]: time="2025-03-17T17:48:17.216232539Z" level=info msg="ImageCreate event name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:17.221501 containerd[1502]: time="2025-03-17T17:48:17.221391961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:17.223609 containerd[1502]: time="2025-03-17T17:48:17.223436529Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 1.298936363s"
Mar 17 17:48:17.223609 containerd[1502]: time="2025-03-17T17:48:17.223489812Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\""
Mar 17 17:48:17.251766 containerd[1502]: time="2025-03-17T17:48:17.251509657Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 17:48:18.174527 containerd[1502]: time="2025-03-17T17:48:18.173435027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:18.175658 containerd[1502]: time="2025-03-17T17:48:18.175605324Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264656"
Mar 17 17:48:18.176665 containerd[1502]: time="2025-03-17T17:48:18.176608116Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:18.179741 containerd[1502]: time="2025-03-17T17:48:18.179706929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:18.181051 containerd[1502]: time="2025-03-17T17:48:18.181008667Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 929.459609ms"
Mar 17 17:48:18.181051 containerd[1502]: time="2025-03-17T17:48:18.181042865Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\""
Mar 17 17:48:18.206119 containerd[1502]: time="2025-03-17T17:48:18.206078435Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 17:48:19.169027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597896272.mount: Deactivated successfully.
Mar 17 17:48:19.490497 containerd[1502]: time="2025-03-17T17:48:19.490443697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:19.504620 containerd[1502]: time="2025-03-17T17:48:19.504466063Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771874"
Mar 17 17:48:19.506653 containerd[1502]: time="2025-03-17T17:48:19.506560489Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:19.510533 containerd[1502]: time="2025-03-17T17:48:19.510301400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:19.511444 containerd[1502]: time="2025-03-17T17:48:19.511400430Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.305142524s"
Mar 17 17:48:19.511509 containerd[1502]: time="2025-03-17T17:48:19.511445668Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\""
Mar 17 17:48:19.534197 containerd[1502]: time="2025-03-17T17:48:19.534158122Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 17:48:20.141269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3596909471.mount: Deactivated successfully.
Mar 17 17:48:20.751400 containerd[1502]: time="2025-03-17T17:48:20.749200432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:20.751400 containerd[1502]: time="2025-03-17T17:48:20.751244305Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Mar 17 17:48:20.752010 containerd[1502]: time="2025-03-17T17:48:20.751971113Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:20.755936 containerd[1502]: time="2025-03-17T17:48:20.755881306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:20.758381 containerd[1502]: time="2025-03-17T17:48:20.758311682Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.224106561s"
Mar 17 17:48:20.758583 containerd[1502]: time="2025-03-17T17:48:20.758554191Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Mar 17 17:48:20.786544 containerd[1502]: time="2025-03-17T17:48:20.786491193Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 17:48:21.334899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935513187.mount: Deactivated successfully.
Mar 17 17:48:21.341269 containerd[1502]: time="2025-03-17T17:48:21.340295246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:21.342536 containerd[1502]: time="2025-03-17T17:48:21.342442159Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841"
Mar 17 17:48:21.344158 containerd[1502]: time="2025-03-17T17:48:21.344062653Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:21.348453 containerd[1502]: time="2025-03-17T17:48:21.348325519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:21.349577 containerd[1502]: time="2025-03-17T17:48:21.349147646Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 562.610814ms"
Mar 17 17:48:21.349577 containerd[1502]: time="2025-03-17T17:48:21.349184685Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Mar 17 17:48:21.372509 containerd[1502]: time="2025-03-17T17:48:21.372469138Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 17 17:48:21.955974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363294898.mount: Deactivated successfully.
Mar 17 17:48:23.338054 containerd[1502]: time="2025-03-17T17:48:23.337856399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:23.339900 containerd[1502]: time="2025-03-17T17:48:23.339846487Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552"
Mar 17 17:48:23.341145 containerd[1502]: time="2025-03-17T17:48:23.341038164Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:23.345574 containerd[1502]: time="2025-03-17T17:48:23.345534640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:48:23.348466 containerd[1502]: time="2025-03-17T17:48:23.348277340Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 1.975763363s"
Mar 17 17:48:23.348466 containerd[1502]: time="2025-03-17T17:48:23.348324698Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Mar 17 17:48:25.725978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Mar 17 17:48:25.734588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:48:25.847528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:48:25.853655 (kubelet)[2372]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:48:25.899369 kubelet[2372]: E0317 17:48:25.897953 2372 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:48:25.900677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:48:25.900826 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:48:25.903473 systemd[1]: kubelet.service: Consumed 134ms CPU time, 92.3M memory peak.
Mar 17 17:48:28.070824 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:48:28.071738 systemd[1]: kubelet.service: Consumed 134ms CPU time, 92.3M memory peak.
Mar 17 17:48:28.085916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:48:28.114555 systemd[1]: Reload requested from client PID 2387 ('systemctl') (unit session-7.scope)...
Mar 17 17:48:28.114711 systemd[1]: Reloading...
Mar 17 17:48:28.238130 zram_generator::config[2430]: No configuration found.
Mar 17 17:48:28.345252 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:48:28.434265 systemd[1]: Reloading finished in 319 ms.
Mar 17 17:48:28.484061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:48:28.489784 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:48:28.493406 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:48:28.493638 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:48:28.493688 systemd[1]: kubelet.service: Consumed 94ms CPU time, 82.3M memory peak.
Mar 17 17:48:28.501000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:48:28.612724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:48:28.614878 (kubelet)[2482]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:48:28.662175 kubelet[2482]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:48:28.662561 kubelet[2482]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:48:28.662657 kubelet[2482]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:48:28.662847 kubelet[2482]: I0317 17:48:28.662817 2482 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:48:29.308524 kubelet[2482]: I0317 17:48:29.308483 2482 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 17:48:29.310410 kubelet[2482]: I0317 17:48:29.308758 2482 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:48:29.310410 kubelet[2482]: I0317 17:48:29.309012 2482 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 17:48:29.331221 kubelet[2482]: E0317 17:48:29.331152 2482 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://138.199.148.212:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 138.199.148.212:6443: connect: connection refused
Mar 17 17:48:29.331613 kubelet[2482]: I0317 17:48:29.331495 2482 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:48:29.342182 kubelet[2482]: I0317 17:48:29.342127 2482 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:48:29.343898 kubelet[2482]: I0317 17:48:29.343856 2482 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:48:29.344423 kubelet[2482]: I0317 17:48:29.344027 2482 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-0-b-a06069b96b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 17:48:29.344423 kubelet[2482]: I0317 17:48:29.344314 2482 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:48:29.344423 kubelet[2482]: I0317 17:48:29.344323 2482 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 17:48:29.345120 kubelet[2482]: I0317 17:48:29.344850 2482 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:48:29.346550 kubelet[2482]: I0317 17:48:29.346533 2482 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 17:48:29.346650 kubelet[2482]: I0317 17:48:29.346639 2482 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:48:29.346839 kubelet[2482]: W0317 17:48:29.346748 2482 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.148.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-b-a06069b96b&limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused
Mar 17 17:48:29.346839 kubelet[2482]: E0317 17:48:29.346831 2482 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.148.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-b-a06069b96b&limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused
Mar 17 17:48:29.347014 kubelet[2482]: I0317 17:48:29.346996 2482 kubelet.go:312] "Adding apiserver pod source"
Mar 17 17:48:29.348443 kubelet[2482]: I0317 17:48:29.347170 2482 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:48:29.349121 kubelet[2482]: I0317 17:48:29.349077 2482 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:48:29.349652 kubelet[2482]: I0317 17:48:29.349622 2482 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:48:29.349781 kubelet[2482]: W0317 17:48:29.349760 2482 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 17:48:29.351037 kubelet[2482]: I0317 17:48:29.351003 2482 server.go:1264] "Started kubelet"
Mar 17 17:48:29.351268 kubelet[2482]: W0317 17:48:29.351203 2482 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.148.212:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused
Mar 17 17:48:29.351315 kubelet[2482]: E0317 17:48:29.351295 2482 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.148.212:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused
Mar 17 17:48:29.354929 kubelet[2482]: E0317 17:48:29.354553 2482 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.148.212:6443/api/v1/namespaces/default/events\": dial tcp 138.199.148.212:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-0-b-a06069b96b.182da853f19357a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-0-b-a06069b96b,UID:ci-4230-1-0-b-a06069b96b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-0-b-a06069b96b,},FirstTimestamp:2025-03-17 17:48:29.350967204 +0000 UTC m=+0.731864721,LastTimestamp:2025-03-17 17:48:29.350967204 +0000 UTC m=+0.731864721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-0-b-a06069b96b,}"
Mar 17 17:48:29.355871 kubelet[2482]: I0317 17:48:29.355824 2482 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:48:29.356597 kubelet[2482]: I0317 17:48:29.356582 2482 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:48:29.356694 kubelet[2482]: I0317 17:48:29.356167 2482 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:48:29.359644 kubelet[2482]: I0317 17:48:29.359565 2482 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:48:29.360745 kubelet[2482]: I0317 17:48:29.360579 2482 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 17:48:29.361682 kubelet[2482]: I0317 17:48:29.361647 2482 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 17:48:29.361792 kubelet[2482]: I0317 17:48:29.361774 2482 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:48:29.363009 kubelet[2482]: I0317 17:48:29.362974 2482 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:48:29.363494 kubelet[2482]: W0317 17:48:29.363407 2482 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.148.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused
Mar 17 17:48:29.363494 kubelet[2482]: E0317 17:48:29.363465 2482 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.148.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused
Mar 17 17:48:29.364272 kubelet[2482]: E0317 17:48:29.363934 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.148.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-b-a06069b96b?timeout=10s\": dial tcp 138.199.148.212:6443: connect: connection refused" interval="200ms"
Mar 17 17:48:29.364704 kubelet[2482]: I0317 17:48:29.364670 2482 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:48:29.364954 kubelet[2482]: I0317 17:48:29.364763 2482 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:48:29.366425 kubelet[2482]: E0317 17:48:29.366229 2482 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:48:29.366575 kubelet[2482]: I0317 17:48:29.366546 2482 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:48:29.378114 kubelet[2482]: I0317 17:48:29.378034 2482 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:48:29.379195 kubelet[2482]: I0317 17:48:29.379156 2482 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:48:29.379341 kubelet[2482]: I0317 17:48:29.379328 2482 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:48:29.379404 kubelet[2482]: I0317 17:48:29.379394 2482 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 17:48:29.379471 kubelet[2482]: E0317 17:48:29.379447 2482 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:48:29.386606 kubelet[2482]: W0317 17:48:29.386284 2482 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.148.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused
Mar 17 17:48:29.386606 kubelet[2482]: E0317 17:48:29.386416 2482 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.148.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused
Mar 17 17:48:29.397216 kubelet[2482]: I0317 17:48:29.396813 2482 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 17:48:29.397216 kubelet[2482]: I0317 17:48:29.396837 2482 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 17:48:29.397216 kubelet[2482]: I0317 17:48:29.396857 2482 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:48:29.399712 kubelet[2482]: I0317 17:48:29.399584 2482 policy_none.go:49] "None policy: Start"
Mar 17 17:48:29.400753 kubelet[2482]: I0317 17:48:29.400335 2482 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 17:48:29.400753 kubelet[2482]: I0317 17:48:29.400385 2482 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:48:29.408243 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 17 17:48:29.427867 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 17 17:48:29.431947 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 17 17:48:29.441419 kubelet[2482]: I0317 17:48:29.441046 2482 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:48:29.441587 kubelet[2482]: I0317 17:48:29.441442 2482 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:48:29.441753 kubelet[2482]: I0317 17:48:29.441635 2482 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:48:29.444213 kubelet[2482]: E0317 17:48:29.444163 2482 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-0-b-a06069b96b\" not found"
Mar 17 17:48:29.463157 kubelet[2482]: I0317 17:48:29.462660 2482 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.463452 kubelet[2482]: E0317 17:48:29.463196 2482 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.148.212:6443/api/v1/nodes\": dial tcp 138.199.148.212:6443: connect: connection refused" node="ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.480256 kubelet[2482]: I0317 17:48:29.479776 2482 topology_manager.go:215] "Topology Admit Handler" podUID="fae5a8f1a84b45a4b3e26871bce7df43" podNamespace="kube-system" podName="kube-apiserver-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.482606 kubelet[2482]: I0317 17:48:29.482545 2482 topology_manager.go:215] "Topology Admit Handler" podUID="ad509f0a57347c3b0d043adccd89f6d4" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.486859 kubelet[2482]: I0317 17:48:29.486462 2482 topology_manager.go:215] "Topology Admit Handler" podUID="8ffaf16a955bcb00edf917b6ce81e5d8" podNamespace="kube-system" podName="kube-scheduler-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.497865 systemd[1]: Created slice kubepods-burstable-podfae5a8f1a84b45a4b3e26871bce7df43.slice - libcontainer container kubepods-burstable-podfae5a8f1a84b45a4b3e26871bce7df43.slice.
Mar 17 17:48:29.511296 systemd[1]: Created slice kubepods-burstable-podad509f0a57347c3b0d043adccd89f6d4.slice - libcontainer container kubepods-burstable-podad509f0a57347c3b0d043adccd89f6d4.slice.
Mar 17 17:48:29.527992 systemd[1]: Created slice kubepods-burstable-pod8ffaf16a955bcb00edf917b6ce81e5d8.slice - libcontainer container kubepods-burstable-pod8ffaf16a955bcb00edf917b6ce81e5d8.slice.
Mar 17 17:48:29.564468 kubelet[2482]: I0317 17:48:29.563547 2482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad509f0a57347c3b0d043adccd89f6d4-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-0-b-a06069b96b\" (UID: \"ad509f0a57347c3b0d043adccd89f6d4\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.564468 kubelet[2482]: I0317 17:48:29.563656 2482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad509f0a57347c3b0d043adccd89f6d4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-0-b-a06069b96b\" (UID: \"ad509f0a57347c3b0d043adccd89f6d4\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.564468 kubelet[2482]: I0317 17:48:29.563714 2482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ffaf16a955bcb00edf917b6ce81e5d8-kubeconfig\") pod \"kube-scheduler-ci-4230-1-0-b-a06069b96b\" (UID: \"8ffaf16a955bcb00edf917b6ce81e5d8\") " pod="kube-system/kube-scheduler-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.564468 kubelet[2482]: I0317 17:48:29.563751 2482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad509f0a57347c3b0d043adccd89f6d4-ca-certs\") pod \"kube-controller-manager-ci-4230-1-0-b-a06069b96b\" (UID: \"ad509f0a57347c3b0d043adccd89f6d4\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.564468 kubelet[2482]: I0317 17:48:29.563788 2482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ad509f0a57347c3b0d043adccd89f6d4-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-0-b-a06069b96b\" (UID: \"ad509f0a57347c3b0d043adccd89f6d4\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.565473 kubelet[2482]: I0317 17:48:29.563822 2482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad509f0a57347c3b0d043adccd89f6d4-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-0-b-a06069b96b\" (UID: \"ad509f0a57347c3b0d043adccd89f6d4\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.565473 kubelet[2482]: I0317 17:48:29.563855 2482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fae5a8f1a84b45a4b3e26871bce7df43-ca-certs\") pod \"kube-apiserver-ci-4230-1-0-b-a06069b96b\" (UID: \"fae5a8f1a84b45a4b3e26871bce7df43\") " pod="kube-system/kube-apiserver-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.565473 kubelet[2482]: I0317 17:48:29.563884 2482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fae5a8f1a84b45a4b3e26871bce7df43-k8s-certs\") pod \"kube-apiserver-ci-4230-1-0-b-a06069b96b\" (UID: \"fae5a8f1a84b45a4b3e26871bce7df43\") " pod="kube-system/kube-apiserver-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.565473 kubelet[2482]: I0317 17:48:29.563917 2482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fae5a8f1a84b45a4b3e26871bce7df43-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-0-b-a06069b96b\" (UID: \"fae5a8f1a84b45a4b3e26871bce7df43\") " pod="kube-system/kube-apiserver-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.565473 kubelet[2482]: E0317 17:48:29.564980 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.148.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-b-a06069b96b?timeout=10s\": dial tcp 138.199.148.212:6443: connect: connection refused" interval="400ms"
Mar 17 17:48:29.665274 kubelet[2482]: I0317 17:48:29.665228 2482 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.665903 kubelet[2482]: E0317 17:48:29.665690 2482 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.148.212:6443/api/v1/nodes\": dial tcp 138.199.148.212:6443: connect: connection refused" node="ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:29.810968 containerd[1502]: time="2025-03-17T17:48:29.810905867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-0-b-a06069b96b,Uid:fae5a8f1a84b45a4b3e26871bce7df43,Namespace:kube-system,Attempt:0,}"
Mar 17 17:48:29.823616 containerd[1502]: time="2025-03-17T17:48:29.823404433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-0-b-a06069b96b,Uid:ad509f0a57347c3b0d043adccd89f6d4,Namespace:kube-system,Attempt:0,}"
Mar 17 17:48:29.832449 containerd[1502]: time="2025-03-17T17:48:29.832089814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-0-b-a06069b96b,Uid:8ffaf16a955bcb00edf917b6ce81e5d8,Namespace:kube-system,Attempt:0,}"
Mar 17 17:48:29.842608 kubelet[2482]: E0317 17:48:29.842402 2482 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.148.212:6443/api/v1/namespaces/default/events\": dial tcp 138.199.148.212:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-0-b-a06069b96b.182da853f19357a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-0-b-a06069b96b,UID:ci-4230-1-0-b-a06069b96b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-0-b-a06069b96b,},FirstTimestamp:2025-03-17 17:48:29.350967204 +0000 UTC m=+0.731864721,LastTimestamp:2025-03-17 17:48:29.350967204 +0000 UTC m=+0.731864721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-0-b-a06069b96b,}"
Mar 17 17:48:29.966040 kubelet[2482]: E0317 17:48:29.965971 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.148.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-b-a06069b96b?timeout=10s\": dial tcp 138.199.148.212:6443: connect: connection refused" interval="800ms"
Mar 17 17:48:30.068919 kubelet[2482]: I0317 17:48:30.068849 2482 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:30.069616 kubelet[2482]: E0317 17:48:30.069496 2482 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.148.212:6443/api/v1/nodes\": dial tcp 138.199.148.212:6443: connect: connection refused" node="ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:30.355103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895328127.mount: Deactivated successfully.
Mar 17 17:48:30.364399 containerd[1502]: time="2025-03-17T17:48:30.364308979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:48:30.367075 containerd[1502]: time="2025-03-17T17:48:30.366931677Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Mar 17 17:48:30.368375 containerd[1502]: time="2025-03-17T17:48:30.368193008Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:48:30.369405 containerd[1502]: time="2025-03-17T17:48:30.369350860Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:48:30.370670 containerd[1502]: time="2025-03-17T17:48:30.370554432Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:48:30.371750 containerd[1502]: time="2025-03-17T17:48:30.371614967Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:48:30.371750 containerd[1502]: time="2025-03-17T17:48:30.371712325Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:48:30.373998 containerd[1502]: time="2025-03-17T17:48:30.373962312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:48:30.376891 
containerd[1502]: time="2025-03-17T17:48:30.376606010Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.567066ms" Mar 17 17:48:30.378852 containerd[1502]: time="2025-03-17T17:48:30.378618723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.118252ms" Mar 17 17:48:30.382715 containerd[1502]: time="2025-03-17T17:48:30.382636428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 550.425457ms" Mar 17 17:48:30.400392 kubelet[2482]: W0317 17:48:30.399857 2482 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.148.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:48:30.400392 kubelet[2482]: E0317 17:48:30.399921 2482 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.148.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:48:30.412845 kubelet[2482]: W0317 17:48:30.412676 2482 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.148.212:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:48:30.412845 kubelet[2482]: E0317 17:48:30.412739 2482 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.148.212:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:48:30.488080 containerd[1502]: time="2025-03-17T17:48:30.487389607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:48:30.488080 containerd[1502]: time="2025-03-17T17:48:30.487702639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:48:30.488080 containerd[1502]: time="2025-03-17T17:48:30.487715319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:48:30.488813 containerd[1502]: time="2025-03-17T17:48:30.488674177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:48:30.495365 containerd[1502]: time="2025-03-17T17:48:30.494397082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:48:30.495365 containerd[1502]: time="2025-03-17T17:48:30.494469360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:48:30.495365 containerd[1502]: time="2025-03-17T17:48:30.494480480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:48:30.495365 containerd[1502]: time="2025-03-17T17:48:30.494568558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:48:30.501391 containerd[1502]: time="2025-03-17T17:48:30.500062549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:48:30.501391 containerd[1502]: time="2025-03-17T17:48:30.500115268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:48:30.501391 containerd[1502]: time="2025-03-17T17:48:30.500127507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:48:30.501391 containerd[1502]: time="2025-03-17T17:48:30.500191786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:48:30.518593 systemd[1]: Started cri-containerd-9dd80424bac3f6bf44657149bf4ff87058bf619bdee2a78527430d5d2780aaa1.scope - libcontainer container 9dd80424bac3f6bf44657149bf4ff87058bf619bdee2a78527430d5d2780aaa1. Mar 17 17:48:30.525916 systemd[1]: Started cri-containerd-f27232dce0076f52f6722ab0aa6be763e8bac2f3cbdb8aa13ae1481c1b42e2e5.scope - libcontainer container f27232dce0076f52f6722ab0aa6be763e8bac2f3cbdb8aa13ae1481c1b42e2e5. 
Mar 17 17:48:30.529850 kubelet[2482]: W0317 17:48:30.529764 2482 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.148.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-b-a06069b96b&limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused
Mar 17 17:48:30.529850 kubelet[2482]: E0317 17:48:30.529826 2482 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.148.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-b-a06069b96b&limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused
Mar 17 17:48:30.531137 systemd[1]: Started cri-containerd-4c6058a8a6400efe9c26228cf59a2dce8e9488a3c361a1bf8a492f1fa838ee84.scope - libcontainer container 4c6058a8a6400efe9c26228cf59a2dce8e9488a3c361a1bf8a492f1fa838ee84.
Mar 17 17:48:30.576060 containerd[1502]: time="2025-03-17T17:48:30.576010444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-0-b-a06069b96b,Uid:8ffaf16a955bcb00edf917b6ce81e5d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dd80424bac3f6bf44657149bf4ff87058bf619bdee2a78527430d5d2780aaa1\""
Mar 17 17:48:30.584847 containerd[1502]: time="2025-03-17T17:48:30.584807918Z" level=info msg="CreateContainer within sandbox \"9dd80424bac3f6bf44657149bf4ff87058bf619bdee2a78527430d5d2780aaa1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 17:48:30.591893 containerd[1502]: time="2025-03-17T17:48:30.591848632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-0-b-a06069b96b,Uid:ad509f0a57347c3b0d043adccd89f6d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f27232dce0076f52f6722ab0aa6be763e8bac2f3cbdb8aa13ae1481c1b42e2e5\""
Mar 17 17:48:30.596082 containerd[1502]: time="2025-03-17T17:48:30.596030294Z" level=info msg="CreateContainer within sandbox \"f27232dce0076f52f6722ab0aa6be763e8bac2f3cbdb8aa13ae1481c1b42e2e5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 17:48:30.599597 containerd[1502]: time="2025-03-17T17:48:30.599501212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-0-b-a06069b96b,Uid:fae5a8f1a84b45a4b3e26871bce7df43,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c6058a8a6400efe9c26228cf59a2dce8e9488a3c361a1bf8a492f1fa838ee84\""
Mar 17 17:48:30.603888 containerd[1502]: time="2025-03-17T17:48:30.603848230Z" level=info msg="CreateContainer within sandbox \"4c6058a8a6400efe9c26228cf59a2dce8e9488a3c361a1bf8a492f1fa838ee84\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 17:48:30.604195 containerd[1502]: time="2025-03-17T17:48:30.604112584Z" level=info msg="CreateContainer within sandbox \"9dd80424bac3f6bf44657149bf4ff87058bf619bdee2a78527430d5d2780aaa1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"403c8dc116a3b232e63df6ab58d678e2c97f0873d3e3b1c6f01aad4e5921397f\""
Mar 17 17:48:30.605454 containerd[1502]: time="2025-03-17T17:48:30.604752329Z" level=info msg="StartContainer for \"403c8dc116a3b232e63df6ab58d678e2c97f0873d3e3b1c6f01aad4e5921397f\""
Mar 17 17:48:30.621161 containerd[1502]: time="2025-03-17T17:48:30.621092905Z" level=info msg="CreateContainer within sandbox \"4c6058a8a6400efe9c26228cf59a2dce8e9488a3c361a1bf8a492f1fa838ee84\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de5bf7ebd243d5e2ad9f08c3eb54c58dca93eeaff7196169c6427b61941b40f9\""
Mar 17 17:48:30.621664 containerd[1502]: time="2025-03-17T17:48:30.621635532Z" level=info msg="StartContainer for \"de5bf7ebd243d5e2ad9f08c3eb54c58dca93eeaff7196169c6427b61941b40f9\""
Mar 17 17:48:30.623184 containerd[1502]: time="2025-03-17T17:48:30.623155497Z" level=info msg="CreateContainer within sandbox \"f27232dce0076f52f6722ab0aa6be763e8bac2f3cbdb8aa13ae1481c1b42e2e5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b47085a503e3c8c06f9e027ff89e543e01c1b32ad28d05420ee713238a67db1f\""
Mar 17 17:48:30.623745 containerd[1502]: time="2025-03-17T17:48:30.623718523Z" level=info msg="StartContainer for \"b47085a503e3c8c06f9e027ff89e543e01c1b32ad28d05420ee713238a67db1f\""
Mar 17 17:48:30.639827 systemd[1]: Started cri-containerd-403c8dc116a3b232e63df6ab58d678e2c97f0873d3e3b1c6f01aad4e5921397f.scope - libcontainer container 403c8dc116a3b232e63df6ab58d678e2c97f0873d3e3b1c6f01aad4e5921397f.
Mar 17 17:48:30.665546 systemd[1]: Started cri-containerd-b47085a503e3c8c06f9e027ff89e543e01c1b32ad28d05420ee713238a67db1f.scope - libcontainer container b47085a503e3c8c06f9e027ff89e543e01c1b32ad28d05420ee713238a67db1f.
Mar 17 17:48:30.666779 systemd[1]: Started cri-containerd-de5bf7ebd243d5e2ad9f08c3eb54c58dca93eeaff7196169c6427b61941b40f9.scope - libcontainer container de5bf7ebd243d5e2ad9f08c3eb54c58dca93eeaff7196169c6427b61941b40f9.
Mar 17 17:48:30.702770 containerd[1502]: time="2025-03-17T17:48:30.702418834Z" level=info msg="StartContainer for \"403c8dc116a3b232e63df6ab58d678e2c97f0873d3e3b1c6f01aad4e5921397f\" returns successfully"
Mar 17 17:48:30.736494 containerd[1502]: time="2025-03-17T17:48:30.736454554Z" level=info msg="StartContainer for \"b47085a503e3c8c06f9e027ff89e543e01c1b32ad28d05420ee713238a67db1f\" returns successfully"
Mar 17 17:48:30.737509 containerd[1502]: time="2025-03-17T17:48:30.736611311Z" level=info msg="StartContainer for \"de5bf7ebd243d5e2ad9f08c3eb54c58dca93eeaff7196169c6427b61941b40f9\" returns successfully"
Mar 17 17:48:30.766917 kubelet[2482]: E0317 17:48:30.766795 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.148.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-b-a06069b96b?timeout=10s\": dial tcp 138.199.148.212:6443: connect: connection refused" interval="1.6s"
Mar 17 17:48:30.872123 kubelet[2482]: I0317 17:48:30.871664 2482 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:30.872123 kubelet[2482]: E0317 17:48:30.871979 2482 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.148.212:6443/api/v1/nodes\": dial tcp 138.199.148.212:6443: connect: connection refused" node="ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:32.475632 kubelet[2482]: I0317 17:48:32.474861 2482 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:33.506317 kubelet[2482]: E0317 17:48:33.506266 2482 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-1-0-b-a06069b96b\" not found" node="ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:33.541424 kubelet[2482]: I0317 17:48:33.539425 2482 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:34.353780 kubelet[2482]: I0317 17:48:34.353731 2482 apiserver.go:52] "Watching apiserver"
Mar 17 17:48:34.362601 kubelet[2482]: I0317 17:48:34.362464 2482 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 17:48:35.565402 systemd[1]: Reload requested from client PID 2756 ('systemctl') (unit session-7.scope)...
Mar 17 17:48:35.565417 systemd[1]: Reloading...
Mar 17 17:48:35.686489 zram_generator::config[2807]: No configuration found.
Mar 17 17:48:35.777750 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:48:35.880985 systemd[1]: Reloading finished in 315 ms.
Mar 17 17:48:35.902596 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:48:35.903085 kubelet[2482]: I0317 17:48:35.902793 2482 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:48:35.919857 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:48:35.920219 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:48:35.920292 systemd[1]: kubelet.service: Consumed 1.128s CPU time, 113.4M memory peak.
Mar 17 17:48:35.928066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:48:36.058653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:48:36.065164 (kubelet)[2846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:48:36.118117 kubelet[2846]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:48:36.118117 kubelet[2846]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:48:36.118117 kubelet[2846]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:48:36.118956 kubelet[2846]: I0317 17:48:36.118227 2846 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:48:36.124464 kubelet[2846]: I0317 17:48:36.124423 2846 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 17:48:36.124464 kubelet[2846]: I0317 17:48:36.124450 2846 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:48:36.124723 kubelet[2846]: I0317 17:48:36.124708 2846 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 17:48:36.126325 kubelet[2846]: I0317 17:48:36.126295 2846 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 17:48:36.128050 kubelet[2846]: I0317 17:48:36.127864 2846 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:48:36.136676 kubelet[2846]: I0317 17:48:36.136577 2846 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:48:36.137123 kubelet[2846]: I0317 17:48:36.136775 2846 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:48:36.137123 kubelet[2846]: I0317 17:48:36.136808 2846 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-0-b-a06069b96b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 17:48:36.137123 kubelet[2846]: I0317 17:48:36.136994 2846 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:48:36.137123 kubelet[2846]: I0317 17:48:36.137002 2846 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 17:48:36.138235 kubelet[2846]: I0317 17:48:36.137037 2846 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:48:36.138235 kubelet[2846]: I0317 17:48:36.137192 2846 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 17:48:36.138235 kubelet[2846]: I0317 17:48:36.137208 2846 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:48:36.138235 kubelet[2846]: I0317 17:48:36.137236 2846 kubelet.go:312] "Adding apiserver pod source"
Mar 17 17:48:36.138235 kubelet[2846]: I0317 17:48:36.137252 2846 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:48:36.138755 kubelet[2846]: I0317 17:48:36.138736 2846 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:48:36.138995 kubelet[2846]: I0317 17:48:36.138981 2846 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:48:36.139533 kubelet[2846]: I0317 17:48:36.139518 2846 server.go:1264] "Started kubelet"
Mar 17 17:48:36.141943 kubelet[2846]: I0317 17:48:36.141921 2846 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:48:36.149806 kubelet[2846]: I0317 17:48:36.149758 2846 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:48:36.152489 kubelet[2846]: I0317 17:48:36.152456 2846 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 17:48:36.155400 kubelet[2846]: I0317 17:48:36.153449 2846 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:48:36.155400 kubelet[2846]: I0317 17:48:36.153663 2846 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:48:36.156622 kubelet[2846]: I0317 17:48:36.156605 2846 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 17:48:36.159491 kubelet[2846]: I0317 17:48:36.159465 2846 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:48:36.160384 kubelet[2846]: I0317 17:48:36.159738 2846 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:48:36.176557 kubelet[2846]: I0317 17:48:36.176507 2846 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:48:36.179693 kubelet[2846]: I0317 17:48:36.179343 2846 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:48:36.179978 kubelet[2846]: I0317 17:48:36.179938 2846 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:48:36.180223 kubelet[2846]: I0317 17:48:36.180128 2846 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 17:48:36.180500 kubelet[2846]: E0317 17:48:36.180322 2846 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:48:36.183452 kubelet[2846]: I0317 17:48:36.182689 2846 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:48:36.183452 kubelet[2846]: I0317 17:48:36.182804 2846 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:48:36.194371 kubelet[2846]: E0317 17:48:36.192694 2846 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:48:36.198678 kubelet[2846]: I0317 17:48:36.197887 2846 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:48:36.251901 kubelet[2846]: I0317 17:48:36.251861 2846 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 17:48:36.251901 kubelet[2846]: I0317 17:48:36.251895 2846 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 17:48:36.252082 kubelet[2846]: I0317 17:48:36.251920 2846 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:48:36.252135 kubelet[2846]: I0317 17:48:36.252097 2846 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 17:48:36.252135 kubelet[2846]: I0317 17:48:36.252110 2846 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 17:48:36.252135 kubelet[2846]: I0317 17:48:36.252130 2846 policy_none.go:49] "None policy: Start"
Mar 17 17:48:36.253068 kubelet[2846]: I0317 17:48:36.253051 2846 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 17:48:36.253123 kubelet[2846]: I0317 17:48:36.253080 2846 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:48:36.253261 kubelet[2846]: I0317 17:48:36.253243 2846 state_mem.go:75] "Updated machine memory state"
Mar 17 17:48:36.258614 kubelet[2846]: I0317 17:48:36.257749 2846 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:48:36.258614 kubelet[2846]: I0317 17:48:36.257918 2846 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:48:36.258614 kubelet[2846]: I0317 17:48:36.258011 2846 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:48:36.264467 kubelet[2846]: I0317 17:48:36.263905 2846 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.277293 kubelet[2846]: I0317 17:48:36.277250 2846 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.277436 kubelet[2846]: I0317 17:48:36.277335 2846 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.281201 kubelet[2846]: I0317 17:48:36.281055 2846 topology_manager.go:215] "Topology Admit Handler" podUID="fae5a8f1a84b45a4b3e26871bce7df43" podNamespace="kube-system" podName="kube-apiserver-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.281476 kubelet[2846]: I0317 17:48:36.281413 2846 topology_manager.go:215] "Topology Admit Handler" podUID="ad509f0a57347c3b0d043adccd89f6d4" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.281589 kubelet[2846]: I0317 17:48:36.281573 2846 topology_manager.go:215] "Topology Admit Handler" podUID="8ffaf16a955bcb00edf917b6ce81e5d8" podNamespace="kube-system" podName="kube-scheduler-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.297365 kubelet[2846]: E0317 17:48:36.297298 2846 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230-1-0-b-a06069b96b\" already exists" pod="kube-system/kube-scheduler-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.297499 kubelet[2846]: E0317 17:48:36.297431 2846 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230-1-0-b-a06069b96b\" already exists" pod="kube-system/kube-controller-manager-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.363313 kubelet[2846]: I0317 17:48:36.362805 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad509f0a57347c3b0d043adccd89f6d4-ca-certs\") pod \"kube-controller-manager-ci-4230-1-0-b-a06069b96b\" (UID: \"ad509f0a57347c3b0d043adccd89f6d4\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.363313 kubelet[2846]: I0317 17:48:36.362938 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ad509f0a57347c3b0d043adccd89f6d4-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-0-b-a06069b96b\" (UID: \"ad509f0a57347c3b0d043adccd89f6d4\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.363313 kubelet[2846]: I0317 17:48:36.362986 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fae5a8f1a84b45a4b3e26871bce7df43-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-0-b-a06069b96b\" (UID: \"fae5a8f1a84b45a4b3e26871bce7df43\") " pod="kube-system/kube-apiserver-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.363313 kubelet[2846]: I0317 17:48:36.363023 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad509f0a57347c3b0d043adccd89f6d4-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-0-b-a06069b96b\" (UID: \"ad509f0a57347c3b0d043adccd89f6d4\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.363313 kubelet[2846]: I0317 17:48:36.363057 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad509f0a57347c3b0d043adccd89f6d4-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-0-b-a06069b96b\" (UID: \"ad509f0a57347c3b0d043adccd89f6d4\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.363736 kubelet[2846]: I0317 17:48:36.363092 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad509f0a57347c3b0d043adccd89f6d4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-0-b-a06069b96b\" (UID: \"ad509f0a57347c3b0d043adccd89f6d4\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.363736 kubelet[2846]: I0317 17:48:36.363128 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ffaf16a955bcb00edf917b6ce81e5d8-kubeconfig\") pod \"kube-scheduler-ci-4230-1-0-b-a06069b96b\" (UID: \"8ffaf16a955bcb00edf917b6ce81e5d8\") " pod="kube-system/kube-scheduler-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.363736 kubelet[2846]: I0317 17:48:36.363161 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fae5a8f1a84b45a4b3e26871bce7df43-ca-certs\") pod \"kube-apiserver-ci-4230-1-0-b-a06069b96b\" (UID: \"fae5a8f1a84b45a4b3e26871bce7df43\") " pod="kube-system/kube-apiserver-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:36.363736 kubelet[2846]: I0317 17:48:36.363210 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fae5a8f1a84b45a4b3e26871bce7df43-k8s-certs\") pod \"kube-apiserver-ci-4230-1-0-b-a06069b96b\" (UID: \"fae5a8f1a84b45a4b3e26871bce7df43\") " pod="kube-system/kube-apiserver-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:37.138658 kubelet[2846]: I0317 17:48:37.138326 2846 apiserver.go:52] "Watching apiserver"
Mar 17 17:48:37.160777 kubelet[2846]: I0317 17:48:37.160724 2846 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 17:48:37.276404 kubelet[2846]: E0317 17:48:37.276350 2846 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-1-0-b-a06069b96b\" already exists" pod="kube-system/kube-apiserver-ci-4230-1-0-b-a06069b96b"
Mar 17 17:48:37.321992 kubelet[2846]: I0317 17:48:37.321922 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-0-b-a06069b96b" podStartSLOduration=1.321879674 podStartE2EDuration="1.321879674s" podCreationTimestamp="2025-03-17 17:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:48:37.310249507 +0000 UTC m=+1.240526221" watchObservedRunningTime="2025-03-17 17:48:37.321879674 +0000 UTC m=+1.252156388"
Mar 17 17:48:37.360720 kubelet[2846]: I0317 17:48:37.360595 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-0-b-a06069b96b" podStartSLOduration=3.360574804 podStartE2EDuration="3.360574804s" podCreationTimestamp="2025-03-17 17:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:48:37.339543001 +0000 UTC m=+1.269819715" watchObservedRunningTime="2025-03-17 17:48:37.360574804 +0000 UTC m=+1.290851518"
Mar 17 17:48:37.382915 kubelet[2846]: I0317 17:48:37.382797 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-0-b-a06069b96b" podStartSLOduration=2.382777952 podStartE2EDuration="2.382777952s" podCreationTimestamp="2025-03-17 17:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:48:37.362789895 +0000 UTC m=+1.293066609" watchObservedRunningTime="2025-03-17 17:48:37.382777952 +0000 UTC m=+1.313054666"
Mar 17 17:48:41.361370 sudo[1890]: pam_unix(sudo:session): session closed for user root
Mar 17 17:48:41.521962 sshd[1889]: Connection closed by 139.178.89.65 port 50164
Mar 17 17:48:41.522767 sshd-session[1887]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:41.526995 systemd[1]: sshd@6-138.199.148.212:22-139.178.89.65:50164.service: Deactivated successfully.
Mar 17 17:48:41.529919 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 17:48:41.530307 systemd[1]: session-7.scope: Consumed 6.480s CPU time, 254.9M memory peak.
Mar 17 17:48:41.532584 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit.
Mar 17 17:48:41.533723 systemd-logind[1479]: Removed session 7.
Mar 17 17:48:50.964757 kubelet[2846]: I0317 17:48:50.964695 2846 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 17:48:50.966209 containerd[1502]: time="2025-03-17T17:48:50.965480282Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 17:48:50.967510 kubelet[2846]: I0317 17:48:50.965664 2846 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 17:48:51.604863 kubelet[2846]: I0317 17:48:51.604812 2846 topology_manager.go:215] "Topology Admit Handler" podUID="dead0cd9-9902-48c1-9531-0dd4909b2da5" podNamespace="kube-system" podName="kube-proxy-jpjtv"
Mar 17 17:48:51.619076 systemd[1]: Created slice kubepods-besteffort-poddead0cd9_9902_48c1_9531_0dd4909b2da5.slice - libcontainer container kubepods-besteffort-poddead0cd9_9902_48c1_9531_0dd4909b2da5.slice.
Mar 17 17:48:51.666070 kubelet[2846]: I0317 17:48:51.666017 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dead0cd9-9902-48c1-9531-0dd4909b2da5-kube-proxy\") pod \"kube-proxy-jpjtv\" (UID: \"dead0cd9-9902-48c1-9531-0dd4909b2da5\") " pod="kube-system/kube-proxy-jpjtv"
Mar 17 17:48:51.666220 kubelet[2846]: I0317 17:48:51.666081 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dead0cd9-9902-48c1-9531-0dd4909b2da5-lib-modules\") pod \"kube-proxy-jpjtv\" (UID: \"dead0cd9-9902-48c1-9531-0dd4909b2da5\") " pod="kube-system/kube-proxy-jpjtv"
Mar 17 17:48:51.666220 kubelet[2846]: I0317 17:48:51.666128 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws74w\" (UniqueName: \"kubernetes.io/projected/dead0cd9-9902-48c1-9531-0dd4909b2da5-kube-api-access-ws74w\") pod \"kube-proxy-jpjtv\" (UID: \"dead0cd9-9902-48c1-9531-0dd4909b2da5\") " pod="kube-system/kube-proxy-jpjtv"
Mar 17 17:48:51.666220 kubelet[2846]: I0317 17:48:51.666161 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dead0cd9-9902-48c1-9531-0dd4909b2da5-xtables-lock\") pod \"kube-proxy-jpjtv\" (UID: \"dead0cd9-9902-48c1-9531-0dd4909b2da5\") " pod="kube-system/kube-proxy-jpjtv"
Mar 17 17:48:51.929926 containerd[1502]: time="2025-03-17T17:48:51.929431321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jpjtv,Uid:dead0cd9-9902-48c1-9531-0dd4909b2da5,Namespace:kube-system,Attempt:0,}"
Mar 17 17:48:51.956860 containerd[1502]: time="2025-03-17T17:48:51.956658128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:48:51.957533 containerd[1502]: time="2025-03-17T17:48:51.957160329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:48:51.957533 containerd[1502]: time="2025-03-17T17:48:51.957316089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:48:51.957533 containerd[1502]: time="2025-03-17T17:48:51.957422169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:48:51.985040 systemd[1]: Started cri-containerd-8a86be92fe53e9729880c65403b08fba46f267070f17a399fc8d4827bf021e3d.scope - libcontainer container 8a86be92fe53e9729880c65403b08fba46f267070f17a399fc8d4827bf021e3d.
Mar 17 17:48:51.998601 kubelet[2846]: I0317 17:48:51.998551 2846 topology_manager.go:215] "Topology Admit Handler" podUID="8649dfee-cbea-4f1b-81e2-c3a36ca0da0f" podNamespace="tigera-operator" podName="tigera-operator-6479d6dc54-285h7"
Mar 17 17:48:52.007472 systemd[1]: Created slice kubepods-besteffort-pod8649dfee_cbea_4f1b_81e2_c3a36ca0da0f.slice - libcontainer container kubepods-besteffort-pod8649dfee_cbea_4f1b_81e2_c3a36ca0da0f.slice.
Mar 17 17:48:52.068880 kubelet[2846]: I0317 17:48:52.068555 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8649dfee-cbea-4f1b-81e2-c3a36ca0da0f-var-lib-calico\") pod \"tigera-operator-6479d6dc54-285h7\" (UID: \"8649dfee-cbea-4f1b-81e2-c3a36ca0da0f\") " pod="tigera-operator/tigera-operator-6479d6dc54-285h7"
Mar 17 17:48:52.068880 kubelet[2846]: I0317 17:48:52.068618 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s67xm\" (UniqueName: \"kubernetes.io/projected/8649dfee-cbea-4f1b-81e2-c3a36ca0da0f-kube-api-access-s67xm\") pod \"tigera-operator-6479d6dc54-285h7\" (UID: \"8649dfee-cbea-4f1b-81e2-c3a36ca0da0f\") " pod="tigera-operator/tigera-operator-6479d6dc54-285h7"
Mar 17 17:48:52.070170 containerd[1502]: time="2025-03-17T17:48:52.070121902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jpjtv,Uid:dead0cd9-9902-48c1-9531-0dd4909b2da5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a86be92fe53e9729880c65403b08fba46f267070f17a399fc8d4827bf021e3d\""
Mar 17 17:48:52.074347 containerd[1502]: time="2025-03-17T17:48:52.074300033Z" level=info msg="CreateContainer within sandbox \"8a86be92fe53e9729880c65403b08fba46f267070f17a399fc8d4827bf021e3d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 17:48:52.092169 containerd[1502]: time="2025-03-17T17:48:52.092126839Z" level=info msg="CreateContainer within sandbox \"8a86be92fe53e9729880c65403b08fba46f267070f17a399fc8d4827bf021e3d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9816bf3e554b1ad85ceebdcfac28feec322096563f65ecd3782ece0805111e05\""
Mar 17 17:48:52.094838 containerd[1502]: time="2025-03-17T17:48:52.093187522Z" level=info msg="StartContainer for \"9816bf3e554b1ad85ceebdcfac28feec322096563f65ecd3782ece0805111e05\""
Mar 17 17:48:52.125585 systemd[1]: Started cri-containerd-9816bf3e554b1ad85ceebdcfac28feec322096563f65ecd3782ece0805111e05.scope - libcontainer container 9816bf3e554b1ad85ceebdcfac28feec322096563f65ecd3782ece0805111e05.
Mar 17 17:48:52.159138 containerd[1502]: time="2025-03-17T17:48:52.159000250Z" level=info msg="StartContainer for \"9816bf3e554b1ad85ceebdcfac28feec322096563f65ecd3782ece0805111e05\" returns successfully"
Mar 17 17:48:52.283322 kubelet[2846]: I0317 17:48:52.283243 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jpjtv" podStartSLOduration=1.283221129 podStartE2EDuration="1.283221129s" podCreationTimestamp="2025-03-17 17:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:48:52.283086408 +0000 UTC m=+16.213363122" watchObservedRunningTime="2025-03-17 17:48:52.283221129 +0000 UTC m=+16.213497843"
Mar 17 17:48:52.313904 containerd[1502]: time="2025-03-17T17:48:52.313342886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-285h7,Uid:8649dfee-cbea-4f1b-81e2-c3a36ca0da0f,Namespace:tigera-operator,Attempt:0,}"
Mar 17 17:48:52.337026 containerd[1502]: time="2025-03-17T17:48:52.336661026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:48:52.337026 containerd[1502]: time="2025-03-17T17:48:52.336721546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:48:52.337026 containerd[1502]: time="2025-03-17T17:48:52.336737426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:48:52.337026 containerd[1502]: time="2025-03-17T17:48:52.336815946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:48:52.355636 systemd[1]: Started cri-containerd-e15f0a0a86ac615b7f0ff915a6f70e42b369b1a6e56ddc225f5ccc6f57a94df7.scope - libcontainer container e15f0a0a86ac615b7f0ff915a6f70e42b369b1a6e56ddc225f5ccc6f57a94df7.
Mar 17 17:48:52.397341 containerd[1502]: time="2025-03-17T17:48:52.396675139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-285h7,Uid:8649dfee-cbea-4f1b-81e2-c3a36ca0da0f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e15f0a0a86ac615b7f0ff915a6f70e42b369b1a6e56ddc225f5ccc6f57a94df7\""
Mar 17 17:48:52.400097 containerd[1502]: time="2025-03-17T17:48:52.400012788Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\""
Mar 17 17:48:52.787952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099771222.mount: Deactivated successfully.
Mar 17 17:48:57.697471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123226395.mount: Deactivated successfully.
Mar 17 17:48:58.153550 containerd[1502]: time="2025-03-17T17:48:58.153500971Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:48:58.155273 containerd[1502]: time="2025-03-17T17:48:58.155199023Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=19271115" Mar 17 17:48:58.155879 containerd[1502]: time="2025-03-17T17:48:58.155835347Z" level=info msg="ImageCreate event name:\"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:48:58.160511 containerd[1502]: time="2025-03-17T17:48:58.160453220Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:48:58.161228 containerd[1502]: time="2025-03-17T17:48:58.161062224Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"19267110\" in 5.761012116s" Mar 17 17:48:58.161228 containerd[1502]: time="2025-03-17T17:48:58.161100944Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\"" Mar 17 17:48:58.164425 containerd[1502]: time="2025-03-17T17:48:58.164234206Z" level=info msg="CreateContainer within sandbox \"e15f0a0a86ac615b7f0ff915a6f70e42b369b1a6e56ddc225f5ccc6f57a94df7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 17 17:48:58.176826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2541420525.mount: Deactivated successfully. 
Mar 17 17:48:58.181388 containerd[1502]: time="2025-03-17T17:48:58.181272806Z" level=info msg="CreateContainer within sandbox \"e15f0a0a86ac615b7f0ff915a6f70e42b369b1a6e56ddc225f5ccc6f57a94df7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f9b43802cce43d2ac17888fe6f21742bcfa781c472eabf385bb38c1a42e3b45a\"" Mar 17 17:48:58.183216 containerd[1502]: time="2025-03-17T17:48:58.183177060Z" level=info msg="StartContainer for \"f9b43802cce43d2ac17888fe6f21742bcfa781c472eabf385bb38c1a42e3b45a\"" Mar 17 17:48:58.215597 systemd[1]: Started cri-containerd-f9b43802cce43d2ac17888fe6f21742bcfa781c472eabf385bb38c1a42e3b45a.scope - libcontainer container f9b43802cce43d2ac17888fe6f21742bcfa781c472eabf385bb38c1a42e3b45a. Mar 17 17:48:58.244606 containerd[1502]: time="2025-03-17T17:48:58.244484891Z" level=info msg="StartContainer for \"f9b43802cce43d2ac17888fe6f21742bcfa781c472eabf385bb38c1a42e3b45a\" returns successfully" Mar 17 17:49:02.924926 kubelet[2846]: I0317 17:49:02.924856 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6479d6dc54-285h7" podStartSLOduration=6.161519593 podStartE2EDuration="11.924825879s" podCreationTimestamp="2025-03-17 17:48:51 +0000 UTC" firstStartedPulling="2025-03-17 17:48:52.398712865 +0000 UTC m=+16.328989539" lastFinishedPulling="2025-03-17 17:48:58.162019111 +0000 UTC m=+22.092295825" observedRunningTime="2025-03-17 17:48:58.30825198 +0000 UTC m=+22.238528694" watchObservedRunningTime="2025-03-17 17:49:02.924825879 +0000 UTC m=+26.855102593" Mar 17 17:49:02.925997 kubelet[2846]: I0317 17:49:02.925632 2846 topology_manager.go:215] "Topology Admit Handler" podUID="ce1946e2-ace6-4124-9136-04c527b79ec8" podNamespace="calico-system" podName="calico-typha-686656c9f9-zv5sd" Mar 17 17:49:02.934602 systemd[1]: Created slice kubepods-besteffort-podce1946e2_ace6_4124_9136_04c527b79ec8.slice - libcontainer container 
kubepods-besteffort-podce1946e2_ace6_4124_9136_04c527b79ec8.slice. Mar 17 17:49:02.937321 kubelet[2846]: W0317 17:49:02.937182 2846 reflector.go:547] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230-1-0-b-a06069b96b" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4230-1-0-b-a06069b96b' and this object Mar 17 17:49:02.937321 kubelet[2846]: E0317 17:49:02.937233 2846 reflector.go:150] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230-1-0-b-a06069b96b" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4230-1-0-b-a06069b96b' and this object Mar 17 17:49:02.937321 kubelet[2846]: W0317 17:49:02.937200 2846 reflector.go:547] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4230-1-0-b-a06069b96b" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4230-1-0-b-a06069b96b' and this object Mar 17 17:49:02.937321 kubelet[2846]: E0317 17:49:02.937253 2846 reflector.go:150] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4230-1-0-b-a06069b96b" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4230-1-0-b-a06069b96b' and this object Mar 17 17:49:02.937890 kubelet[2846]: W0317 17:49:02.937493 2846 reflector.go:547] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4230-1-0-b-a06069b96b" cannot list 
resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4230-1-0-b-a06069b96b' and this object Mar 17 17:49:02.937890 kubelet[2846]: E0317 17:49:02.937515 2846 reflector.go:150] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4230-1-0-b-a06069b96b" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4230-1-0-b-a06069b96b' and this object Mar 17 17:49:03.032297 kubelet[2846]: I0317 17:49:03.031518 2846 topology_manager.go:215] "Topology Admit Handler" podUID="c486f367-e8f1-4495-8302-4a805d81fa28" podNamespace="calico-system" podName="calico-node-jsxrm" Mar 17 17:49:03.041397 kubelet[2846]: I0317 17:49:03.040455 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce1946e2-ace6-4124-9136-04c527b79ec8-tigera-ca-bundle\") pod \"calico-typha-686656c9f9-zv5sd\" (UID: \"ce1946e2-ace6-4124-9136-04c527b79ec8\") " pod="calico-system/calico-typha-686656c9f9-zv5sd" Mar 17 17:49:03.041397 kubelet[2846]: I0317 17:49:03.040497 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9c75\" (UniqueName: \"kubernetes.io/projected/ce1946e2-ace6-4124-9136-04c527b79ec8-kube-api-access-h9c75\") pod \"calico-typha-686656c9f9-zv5sd\" (UID: \"ce1946e2-ace6-4124-9136-04c527b79ec8\") " pod="calico-system/calico-typha-686656c9f9-zv5sd" Mar 17 17:49:03.041397 kubelet[2846]: I0317 17:49:03.040519 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ce1946e2-ace6-4124-9136-04c527b79ec8-typha-certs\") pod \"calico-typha-686656c9f9-zv5sd\" (UID: \"ce1946e2-ace6-4124-9136-04c527b79ec8\") " 
pod="calico-system/calico-typha-686656c9f9-zv5sd" Mar 17 17:49:03.043068 systemd[1]: Created slice kubepods-besteffort-podc486f367_e8f1_4495_8302_4a805d81fa28.slice - libcontainer container kubepods-besteffort-podc486f367_e8f1_4495_8302_4a805d81fa28.slice. Mar 17 17:49:03.141568 kubelet[2846]: I0317 17:49:03.141073 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-lib-modules\") pod \"calico-node-jsxrm\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " pod="calico-system/calico-node-jsxrm" Mar 17 17:49:03.141568 kubelet[2846]: I0317 17:49:03.141127 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-var-run-calico\") pod \"calico-node-jsxrm\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " pod="calico-system/calico-node-jsxrm" Mar 17 17:49:03.141568 kubelet[2846]: I0317 17:49:03.141152 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-flexvol-driver-host\") pod \"calico-node-jsxrm\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " pod="calico-system/calico-node-jsxrm" Mar 17 17:49:03.141568 kubelet[2846]: I0317 17:49:03.141172 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-cni-bin-dir\") pod \"calico-node-jsxrm\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " pod="calico-system/calico-node-jsxrm" Mar 17 17:49:03.141568 kubelet[2846]: I0317 17:49:03.141188 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-cni-net-dir\") pod \"calico-node-jsxrm\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " pod="calico-system/calico-node-jsxrm" Mar 17 17:49:03.141789 kubelet[2846]: I0317 17:49:03.141204 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-policysync\") pod \"calico-node-jsxrm\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " pod="calico-system/calico-node-jsxrm" Mar 17 17:49:03.141789 kubelet[2846]: I0317 17:49:03.141218 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c486f367-e8f1-4495-8302-4a805d81fa28-tigera-ca-bundle\") pod \"calico-node-jsxrm\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " pod="calico-system/calico-node-jsxrm" Mar 17 17:49:03.141789 kubelet[2846]: I0317 17:49:03.141240 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-xtables-lock\") pod \"calico-node-jsxrm\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " pod="calico-system/calico-node-jsxrm" Mar 17 17:49:03.141789 kubelet[2846]: I0317 17:49:03.141254 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c486f367-e8f1-4495-8302-4a805d81fa28-node-certs\") pod \"calico-node-jsxrm\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " pod="calico-system/calico-node-jsxrm" Mar 17 17:49:03.141789 kubelet[2846]: I0317 17:49:03.141272 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-var-lib-calico\") pod 
\"calico-node-jsxrm\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " pod="calico-system/calico-node-jsxrm" Mar 17 17:49:03.141890 kubelet[2846]: I0317 17:49:03.141298 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-cni-log-dir\") pod \"calico-node-jsxrm\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " pod="calico-system/calico-node-jsxrm" Mar 17 17:49:03.141890 kubelet[2846]: I0317 17:49:03.141314 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86hvn\" (UniqueName: \"kubernetes.io/projected/c486f367-e8f1-4495-8302-4a805d81fa28-kube-api-access-86hvn\") pod \"calico-node-jsxrm\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " pod="calico-system/calico-node-jsxrm" Mar 17 17:49:03.143439 kubelet[2846]: I0317 17:49:03.142591 2846 topology_manager.go:215] "Topology Admit Handler" podUID="37c19387-6a1a-435e-b624-cd3e3f772523" podNamespace="calico-system" podName="csi-node-driver-ht6qb" Mar 17 17:49:03.143439 kubelet[2846]: E0317 17:49:03.142860 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht6qb" podUID="37c19387-6a1a-435e-b624-cd3e3f772523" Mar 17 17:49:03.246509 kubelet[2846]: I0317 17:49:03.241787 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/37c19387-6a1a-435e-b624-cd3e3f772523-socket-dir\") pod \"csi-node-driver-ht6qb\" (UID: \"37c19387-6a1a-435e-b624-cd3e3f772523\") " pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:03.246509 kubelet[2846]: I0317 17:49:03.241920 2846 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37c19387-6a1a-435e-b624-cd3e3f772523-kubelet-dir\") pod \"csi-node-driver-ht6qb\" (UID: \"37c19387-6a1a-435e-b624-cd3e3f772523\") " pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:03.246509 kubelet[2846]: I0317 17:49:03.241996 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/37c19387-6a1a-435e-b624-cd3e3f772523-registration-dir\") pod \"csi-node-driver-ht6qb\" (UID: \"37c19387-6a1a-435e-b624-cd3e3f772523\") " pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:03.246509 kubelet[2846]: I0317 17:49:03.242070 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/37c19387-6a1a-435e-b624-cd3e3f772523-varrun\") pod \"csi-node-driver-ht6qb\" (UID: \"37c19387-6a1a-435e-b624-cd3e3f772523\") " pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:03.246509 kubelet[2846]: I0317 17:49:03.242150 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm52h\" (UniqueName: \"kubernetes.io/projected/37c19387-6a1a-435e-b624-cd3e3f772523-kube-api-access-jm52h\") pod \"csi-node-driver-ht6qb\" (UID: \"37c19387-6a1a-435e-b624-cd3e3f772523\") " pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:03.247547 kubelet[2846]: E0317 17:49:03.247519 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.247762 kubelet[2846]: W0317 17:49:03.247741 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.248075 kubelet[2846]: E0317 17:49:03.248051 2846 
plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.343059 kubelet[2846]: E0317 17:49:03.342997 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.343059 kubelet[2846]: W0317 17:49:03.343021 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.343059 kubelet[2846]: E0317 17:49:03.343041 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.343534 kubelet[2846]: E0317 17:49:03.343226 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.343534 kubelet[2846]: W0317 17:49:03.343234 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.343534 kubelet[2846]: E0317 17:49:03.343243 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.343874 kubelet[2846]: E0317 17:49:03.343812 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.343874 kubelet[2846]: W0317 17:49:03.343822 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.343874 kubelet[2846]: E0317 17:49:03.343835 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.344455 kubelet[2846]: E0317 17:49:03.344045 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.344455 kubelet[2846]: W0317 17:49:03.344057 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.344455 kubelet[2846]: E0317 17:49:03.344072 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.344455 kubelet[2846]: E0317 17:49:03.344294 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.344455 kubelet[2846]: W0317 17:49:03.344302 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.344455 kubelet[2846]: E0317 17:49:03.344310 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.345042 kubelet[2846]: E0317 17:49:03.344474 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.345042 kubelet[2846]: W0317 17:49:03.344482 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.345042 kubelet[2846]: E0317 17:49:03.344492 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.345042 kubelet[2846]: E0317 17:49:03.344843 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.345042 kubelet[2846]: W0317 17:49:03.344850 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.345042 kubelet[2846]: E0317 17:49:03.344992 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.345042 kubelet[2846]: W0317 17:49:03.345002 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.345342 kubelet[2846]: E0317 17:49:03.345127 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.345342 kubelet[2846]: W0317 17:49:03.345133 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.345342 kubelet[2846]: E0317 17:49:03.345142 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.345342 kubelet[2846]: E0317 17:49:03.345263 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.345342 kubelet[2846]: W0317 17:49:03.345270 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.345342 kubelet[2846]: E0317 17:49:03.345277 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.345809 kubelet[2846]: E0317 17:49:03.345440 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.345809 kubelet[2846]: E0317 17:49:03.345520 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.346668 kubelet[2846]: E0317 17:49:03.346635 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.346668 kubelet[2846]: W0317 17:49:03.346648 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.346668 kubelet[2846]: E0317 17:49:03.346663 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.347138 kubelet[2846]: E0317 17:49:03.346870 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.347138 kubelet[2846]: W0317 17:49:03.346878 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.347138 kubelet[2846]: E0317 17:49:03.346938 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.347482 kubelet[2846]: E0317 17:49:03.347465 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.347482 kubelet[2846]: W0317 17:49:03.347479 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.347650 kubelet[2846]: E0317 17:49:03.347641 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.347650 kubelet[2846]: W0317 17:49:03.347650 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.347897 kubelet[2846]: E0317 17:49:03.347726 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.348252 kubelet[2846]: E0317 17:49:03.347886 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.348252 kubelet[2846]: W0317 17:49:03.348098 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.348252 kubelet[2846]: E0317 17:49:03.348116 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.348498 kubelet[2846]: E0317 17:49:03.347894 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.348921 kubelet[2846]: E0317 17:49:03.348776 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.348921 kubelet[2846]: W0317 17:49:03.348794 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.348921 kubelet[2846]: E0317 17:49:03.348819 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.349066 kubelet[2846]: E0317 17:49:03.349039 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.349066 kubelet[2846]: W0317 17:49:03.349050 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.349066 kubelet[2846]: E0317 17:49:03.349062 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.349324 kubelet[2846]: E0317 17:49:03.349260 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.349324 kubelet[2846]: W0317 17:49:03.349267 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.349324 kubelet[2846]: E0317 17:49:03.349275 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.349530 kubelet[2846]: E0317 17:49:03.349490 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.349530 kubelet[2846]: W0317 17:49:03.349499 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.349530 kubelet[2846]: E0317 17:49:03.349509 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.349701 kubelet[2846]: E0317 17:49:03.349670 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.349701 kubelet[2846]: W0317 17:49:03.349683 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.349701 kubelet[2846]: E0317 17:49:03.349700 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.349903 kubelet[2846]: E0317 17:49:03.349889 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.349903 kubelet[2846]: W0317 17:49:03.349896 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.349977 kubelet[2846]: E0317 17:49:03.349910 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.350148 kubelet[2846]: E0317 17:49:03.350135 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.350283 kubelet[2846]: W0317 17:49:03.350148 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.350283 kubelet[2846]: E0317 17:49:03.350205 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.350494 kubelet[2846]: E0317 17:49:03.350482 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.350494 kubelet[2846]: W0317 17:49:03.350492 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.350598 kubelet[2846]: E0317 17:49:03.350507 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.350670 kubelet[2846]: E0317 17:49:03.350657 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.350670 kubelet[2846]: W0317 17:49:03.350669 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.350845 kubelet[2846]: E0317 17:49:03.350683 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.350885 kubelet[2846]: E0317 17:49:03.350872 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.350885 kubelet[2846]: W0317 17:49:03.350879 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.351061 kubelet[2846]: E0317 17:49:03.350888 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.351089 kubelet[2846]: E0317 17:49:03.351073 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.351089 kubelet[2846]: W0317 17:49:03.351082 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.351133 kubelet[2846]: E0317 17:49:03.351091 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.351249 kubelet[2846]: E0317 17:49:03.351238 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.351249 kubelet[2846]: W0317 17:49:03.351248 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.351316 kubelet[2846]: E0317 17:49:03.351256 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.351431 kubelet[2846]: E0317 17:49:03.351420 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.351431 kubelet[2846]: W0317 17:49:03.351431 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.351514 kubelet[2846]: E0317 17:49:03.351440 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.351696 kubelet[2846]: E0317 17:49:03.351685 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.351696 kubelet[2846]: W0317 17:49:03.351695 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.351780 kubelet[2846]: E0317 17:49:03.351705 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.351862 kubelet[2846]: E0317 17:49:03.351848 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.351862 kubelet[2846]: W0317 17:49:03.351860 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.351916 kubelet[2846]: E0317 17:49:03.351869 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.450891 kubelet[2846]: E0317 17:49:03.450863 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.450891 kubelet[2846]: W0317 17:49:03.450887 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.451212 kubelet[2846]: E0317 17:49:03.450908 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.452219 kubelet[2846]: E0317 17:49:03.452198 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.452429 kubelet[2846]: W0317 17:49:03.452267 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.452429 kubelet[2846]: E0317 17:49:03.452287 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.452754 kubelet[2846]: E0317 17:49:03.452646 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.452754 kubelet[2846]: W0317 17:49:03.452659 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.452754 kubelet[2846]: E0317 17:49:03.452670 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.453124 kubelet[2846]: E0317 17:49:03.453043 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.453124 kubelet[2846]: W0317 17:49:03.453057 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.453124 kubelet[2846]: E0317 17:49:03.453070 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.453644 kubelet[2846]: E0317 17:49:03.453468 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.453644 kubelet[2846]: W0317 17:49:03.453481 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.453644 kubelet[2846]: E0317 17:49:03.453493 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.453887 kubelet[2846]: E0317 17:49:03.453811 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.453887 kubelet[2846]: W0317 17:49:03.453823 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.453887 kubelet[2846]: E0317 17:49:03.453836 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.554830 kubelet[2846]: E0317 17:49:03.554594 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.554830 kubelet[2846]: W0317 17:49:03.554620 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.554830 kubelet[2846]: E0317 17:49:03.554643 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.556039 kubelet[2846]: E0317 17:49:03.555225 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.556039 kubelet[2846]: W0317 17:49:03.555243 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.556039 kubelet[2846]: E0317 17:49:03.555259 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.556594 kubelet[2846]: E0317 17:49:03.556439 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.556594 kubelet[2846]: W0317 17:49:03.556457 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.556594 kubelet[2846]: E0317 17:49:03.556473 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.558717 kubelet[2846]: E0317 17:49:03.558512 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.558717 kubelet[2846]: W0317 17:49:03.558537 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.558717 kubelet[2846]: E0317 17:49:03.558558 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.559237 kubelet[2846]: E0317 17:49:03.559090 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.559237 kubelet[2846]: W0317 17:49:03.559105 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.559237 kubelet[2846]: E0317 17:49:03.559121 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.559565 kubelet[2846]: E0317 17:49:03.559496 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.559565 kubelet[2846]: W0317 17:49:03.559511 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.559565 kubelet[2846]: E0317 17:49:03.559524 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.661452 kubelet[2846]: E0317 17:49:03.661203 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.661452 kubelet[2846]: W0317 17:49:03.661325 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.661452 kubelet[2846]: E0317 17:49:03.661348 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.662574 kubelet[2846]: E0317 17:49:03.662401 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.662574 kubelet[2846]: W0317 17:49:03.662422 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.662574 kubelet[2846]: E0317 17:49:03.662439 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.663301 kubelet[2846]: E0317 17:49:03.663140 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.663301 kubelet[2846]: W0317 17:49:03.663158 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.663301 kubelet[2846]: E0317 17:49:03.663174 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.663810 kubelet[2846]: E0317 17:49:03.663693 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.663810 kubelet[2846]: W0317 17:49:03.663724 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.663810 kubelet[2846]: E0317 17:49:03.663738 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.664443 kubelet[2846]: E0317 17:49:03.664322 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.664443 kubelet[2846]: W0317 17:49:03.664336 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.664443 kubelet[2846]: E0317 17:49:03.664378 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.666631 kubelet[2846]: E0317 17:49:03.666566 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.666631 kubelet[2846]: W0317 17:49:03.666583 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.666631 kubelet[2846]: E0317 17:49:03.666596 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.768606 kubelet[2846]: E0317 17:49:03.768578 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.768862 kubelet[2846]: W0317 17:49:03.768765 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.768862 kubelet[2846]: E0317 17:49:03.768791 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.769253 kubelet[2846]: E0317 17:49:03.769241 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.769373 kubelet[2846]: W0317 17:49:03.769311 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.769373 kubelet[2846]: E0317 17:49:03.769329 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.770399 kubelet[2846]: E0317 17:49:03.770254 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.770399 kubelet[2846]: W0317 17:49:03.770268 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.770399 kubelet[2846]: E0317 17:49:03.770280 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.770629 kubelet[2846]: E0317 17:49:03.770556 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.770629 kubelet[2846]: W0317 17:49:03.770566 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.770629 kubelet[2846]: E0317 17:49:03.770577 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.771554 kubelet[2846]: E0317 17:49:03.771514 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.771554 kubelet[2846]: W0317 17:49:03.771537 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.771764 kubelet[2846]: E0317 17:49:03.771554 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.771838 kubelet[2846]: E0317 17:49:03.771819 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.771838 kubelet[2846]: W0317 17:49:03.771835 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.771906 kubelet[2846]: E0317 17:49:03.771850 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.873564 kubelet[2846]: E0317 17:49:03.873464 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.873564 kubelet[2846]: W0317 17:49:03.873485 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.873564 kubelet[2846]: E0317 17:49:03.873505 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.874851 kubelet[2846]: E0317 17:49:03.874446 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.874851 kubelet[2846]: W0317 17:49:03.874469 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.874851 kubelet[2846]: E0317 17:49:03.874511 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.875236 kubelet[2846]: E0317 17:49:03.875199 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.875236 kubelet[2846]: W0317 17:49:03.875215 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.875236 kubelet[2846]: E0317 17:49:03.875227 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.875559 kubelet[2846]: E0317 17:49:03.875497 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.875559 kubelet[2846]: W0317 17:49:03.875506 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.875559 kubelet[2846]: E0317 17:49:03.875516 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.875792 kubelet[2846]: E0317 17:49:03.875778 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.875792 kubelet[2846]: W0317 17:49:03.875789 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.875867 kubelet[2846]: E0317 17:49:03.875798 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.876725 kubelet[2846]: E0317 17:49:03.876610 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.876725 kubelet[2846]: W0317 17:49:03.876648 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.876725 kubelet[2846]: E0317 17:49:03.876670 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.918120 kubelet[2846]: E0317 17:49:03.914806 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.918120 kubelet[2846]: W0317 17:49:03.914827 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.918120 kubelet[2846]: E0317 17:49:03.914849 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.921776 kubelet[2846]: E0317 17:49:03.921750 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.922154 kubelet[2846]: W0317 17:49:03.922091 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.923200 kubelet[2846]: E0317 17:49:03.923166 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.923331 kubelet[2846]: E0317 17:49:03.923245 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.923430 kubelet[2846]: W0317 17:49:03.923415 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.923483 kubelet[2846]: E0317 17:49:03.923473 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.978244 kubelet[2846]: E0317 17:49:03.978202 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.979281 kubelet[2846]: W0317 17:49:03.978871 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.979281 kubelet[2846]: E0317 17:49:03.978921 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:03.979571 kubelet[2846]: E0317 17:49:03.979554 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.979644 kubelet[2846]: W0317 17:49:03.979629 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.979720 kubelet[2846]: E0317 17:49:03.979706 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:03.980082 kubelet[2846]: E0317 17:49:03.980064 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:03.980174 kubelet[2846]: W0317 17:49:03.980159 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:03.980281 kubelet[2846]: E0317 17:49:03.980234 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:04.081607 kubelet[2846]: E0317 17:49:04.081566 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.081607 kubelet[2846]: W0317 17:49:04.081607 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.081783 kubelet[2846]: E0317 17:49:04.081639 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.082041 kubelet[2846]: E0317 17:49:04.082020 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.082101 kubelet[2846]: W0317 17:49:04.082044 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.082101 kubelet[2846]: E0317 17:49:04.082065 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:04.082496 kubelet[2846]: E0317 17:49:04.082474 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.082545 kubelet[2846]: W0317 17:49:04.082499 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.082545 kubelet[2846]: E0317 17:49:04.082521 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.090247 kubelet[2846]: E0317 17:49:04.090102 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.090247 kubelet[2846]: W0317 17:49:04.090122 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.090247 kubelet[2846]: E0317 17:49:04.090140 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.144875 kubelet[2846]: E0317 17:49:04.144419 2846 configmap.go:199] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:49:04.144875 kubelet[2846]: E0317 17:49:04.144541 2846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ce1946e2-ace6-4124-9136-04c527b79ec8-tigera-ca-bundle podName:ce1946e2-ace6-4124-9136-04c527b79ec8 nodeName:}" failed. No retries permitted until 2025-03-17 17:49:04.644507712 +0000 UTC m=+28.574784426 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/ce1946e2-ace6-4124-9136-04c527b79ec8-tigera-ca-bundle") pod "calico-typha-686656c9f9-zv5sd" (UID: "ce1946e2-ace6-4124-9136-04c527b79ec8") : failed to sync configmap cache: timed out waiting for the condition Mar 17 17:49:04.183773 kubelet[2846]: E0317 17:49:04.183733 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.183773 kubelet[2846]: W0317 17:49:04.183770 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.183956 kubelet[2846]: E0317 17:49:04.183821 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.184682 kubelet[2846]: E0317 17:49:04.184256 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.184682 kubelet[2846]: W0317 17:49:04.184281 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.184682 kubelet[2846]: E0317 17:49:04.184376 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:04.242421 kubelet[2846]: E0317 17:49:04.242384 2846 configmap.go:199] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:49:04.242764 kubelet[2846]: E0317 17:49:04.242606 2846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c486f367-e8f1-4495-8302-4a805d81fa28-tigera-ca-bundle podName:c486f367-e8f1-4495-8302-4a805d81fa28 nodeName:}" failed. No retries permitted until 2025-03-17 17:49:04.742584644 +0000 UTC m=+28.672861318 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/c486f367-e8f1-4495-8302-4a805d81fa28-tigera-ca-bundle") pod "calico-node-jsxrm" (UID: "c486f367-e8f1-4495-8302-4a805d81fa28") : failed to sync configmap cache: timed out waiting for the condition Mar 17 17:49:04.285921 kubelet[2846]: E0317 17:49:04.285702 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.285921 kubelet[2846]: W0317 17:49:04.285737 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.285921 kubelet[2846]: E0317 17:49:04.285765 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:04.286439 kubelet[2846]: E0317 17:49:04.286273 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.286439 kubelet[2846]: W0317 17:49:04.286308 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.286439 kubelet[2846]: E0317 17:49:04.286328 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.387942 kubelet[2846]: E0317 17:49:04.387896 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.387942 kubelet[2846]: W0317 17:49:04.387930 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.387942 kubelet[2846]: E0317 17:49:04.387987 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:04.388564 kubelet[2846]: E0317 17:49:04.388339 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.388564 kubelet[2846]: W0317 17:49:04.388429 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.388564 kubelet[2846]: E0317 17:49:04.388451 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.489544 kubelet[2846]: E0317 17:49:04.489434 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.489544 kubelet[2846]: W0317 17:49:04.489496 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.489544 kubelet[2846]: E0317 17:49:04.489530 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:04.490204 kubelet[2846]: E0317 17:49:04.490158 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.490204 kubelet[2846]: W0317 17:49:04.490176 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.490204 kubelet[2846]: E0317 17:49:04.490191 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.590902 kubelet[2846]: E0317 17:49:04.590872 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.590902 kubelet[2846]: W0317 17:49:04.590895 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.591113 kubelet[2846]: E0317 17:49:04.590915 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:04.591306 kubelet[2846]: E0317 17:49:04.591287 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.591306 kubelet[2846]: W0317 17:49:04.591302 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.591386 kubelet[2846]: E0317 17:49:04.591314 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.692954 kubelet[2846]: E0317 17:49:04.692862 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.692954 kubelet[2846]: W0317 17:49:04.692887 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.692954 kubelet[2846]: E0317 17:49:04.692908 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:04.693257 kubelet[2846]: E0317 17:49:04.693141 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.693257 kubelet[2846]: W0317 17:49:04.693154 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.693257 kubelet[2846]: E0317 17:49:04.693177 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.693546 kubelet[2846]: E0317 17:49:04.693527 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.693603 kubelet[2846]: W0317 17:49:04.693547 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.693603 kubelet[2846]: E0317 17:49:04.693561 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:04.693896 kubelet[2846]: E0317 17:49:04.693782 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.693896 kubelet[2846]: W0317 17:49:04.693793 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.693896 kubelet[2846]: E0317 17:49:04.693814 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.694292 kubelet[2846]: E0317 17:49:04.694138 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.694292 kubelet[2846]: W0317 17:49:04.694157 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.694292 kubelet[2846]: E0317 17:49:04.694174 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:04.694649 kubelet[2846]: E0317 17:49:04.694429 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.694649 kubelet[2846]: W0317 17:49:04.694441 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.694649 kubelet[2846]: E0317 17:49:04.694496 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.696203 kubelet[2846]: E0317 17:49:04.696180 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.696203 kubelet[2846]: W0317 17:49:04.696197 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.696203 kubelet[2846]: E0317 17:49:04.696209 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.741309 containerd[1502]: time="2025-03-17T17:49:04.740805911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-686656c9f9-zv5sd,Uid:ce1946e2-ace6-4124-9136-04c527b79ec8,Namespace:calico-system,Attempt:0,}" Mar 17 17:49:04.770268 containerd[1502]: time="2025-03-17T17:49:04.769901583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:04.770268 containerd[1502]: time="2025-03-17T17:49:04.770004984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:04.770571 containerd[1502]: time="2025-03-17T17:49:04.770048104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:04.771605 containerd[1502]: time="2025-03-17T17:49:04.771531640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:04.794258 kubelet[2846]: E0317 17:49:04.794228 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.794258 kubelet[2846]: W0317 17:49:04.794252 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.794455 kubelet[2846]: E0317 17:49:04.794274 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:04.794585 kubelet[2846]: E0317 17:49:04.794572 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.794585 kubelet[2846]: W0317 17:49:04.794583 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.794677 kubelet[2846]: E0317 17:49:04.794593 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.794787 kubelet[2846]: E0317 17:49:04.794775 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.794814 kubelet[2846]: W0317 17:49:04.794787 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.794814 kubelet[2846]: E0317 17:49:04.794796 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:04.794940 kubelet[2846]: E0317 17:49:04.794928 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.794940 kubelet[2846]: W0317 17:49:04.794939 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.795076 kubelet[2846]: E0317 17:49:04.794947 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.795168 kubelet[2846]: E0317 17:49:04.795155 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.795196 kubelet[2846]: W0317 17:49:04.795168 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.795196 kubelet[2846]: E0317 17:49:04.795179 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.796053 systemd[1]: Started cri-containerd-93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c.scope - libcontainer container 93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c. 
Mar 17 17:49:04.796603 kubelet[2846]: E0317 17:49:04.796582 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:04.796603 kubelet[2846]: W0317 17:49:04.796600 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:04.796668 kubelet[2846]: E0317 17:49:04.796614 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:04.835751 containerd[1502]: time="2025-03-17T17:49:04.835676529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-686656c9f9-zv5sd,Uid:ce1946e2-ace6-4124-9136-04c527b79ec8,Namespace:calico-system,Attempt:0,} returns sandbox id \"93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c\"" Mar 17 17:49:04.839401 containerd[1502]: time="2025-03-17T17:49:04.839133326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 17 17:49:04.848627 containerd[1502]: time="2025-03-17T17:49:04.848510026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jsxrm,Uid:c486f367-e8f1-4495-8302-4a805d81fa28,Namespace:calico-system,Attempt:0,}" Mar 17 17:49:04.875256 containerd[1502]: time="2025-03-17T17:49:04.875113632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:04.875256 containerd[1502]: time="2025-03-17T17:49:04.875192113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:04.875256 containerd[1502]: time="2025-03-17T17:49:04.875214353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:04.875469 containerd[1502]: time="2025-03-17T17:49:04.875306354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:04.892865 systemd[1]: Started cri-containerd-8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f.scope - libcontainer container 8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f. Mar 17 17:49:04.916689 containerd[1502]: time="2025-03-17T17:49:04.916575437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jsxrm,Uid:c486f367-e8f1-4495-8302-4a805d81fa28,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\"" Mar 17 17:49:05.181566 kubelet[2846]: E0317 17:49:05.181371 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht6qb" podUID="37c19387-6a1a-435e-b624-cd3e3f772523" Mar 17 17:49:06.819709 containerd[1502]: time="2025-03-17T17:49:06.818919569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:06.822261 containerd[1502]: time="2025-03-17T17:49:06.822234728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=28363957" Mar 17 17:49:06.823136 containerd[1502]: time="2025-03-17T17:49:06.823080658Z" level=info msg="ImageCreate event name:\"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:06.826079 containerd[1502]: time="2025-03-17T17:49:06.826020813Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:06.827056 containerd[1502]: time="2025-03-17T17:49:06.826995064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"29733706\" in 1.986617965s" Mar 17 17:49:06.827163 containerd[1502]: time="2025-03-17T17:49:06.827145346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\"" Mar 17 17:49:06.830919 containerd[1502]: time="2025-03-17T17:49:06.830882270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 17 17:49:06.847320 containerd[1502]: time="2025-03-17T17:49:06.846824378Z" level=info msg="CreateContainer within sandbox \"93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 17 17:49:06.865588 containerd[1502]: time="2025-03-17T17:49:06.865501199Z" level=info msg="CreateContainer within sandbox \"93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600\"" Mar 17 17:49:06.866821 containerd[1502]: time="2025-03-17T17:49:06.866764574Z" level=info msg="StartContainer for \"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600\"" Mar 17 17:49:06.903840 systemd[1]: Started cri-containerd-f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600.scope - libcontainer container 
f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600. Mar 17 17:49:06.951642 containerd[1502]: time="2025-03-17T17:49:06.951411214Z" level=info msg="StartContainer for \"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600\" returns successfully" Mar 17 17:49:07.181040 kubelet[2846]: E0317 17:49:07.180812 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht6qb" podUID="37c19387-6a1a-435e-b624-cd3e3f772523" Mar 17 17:49:07.332817 kubelet[2846]: I0317 17:49:07.331735 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-686656c9f9-zv5sd" podStartSLOduration=3.341313791 podStartE2EDuration="5.331714558s" podCreationTimestamp="2025-03-17 17:49:02 +0000 UTC" firstStartedPulling="2025-03-17 17:49:04.838646041 +0000 UTC m=+28.768922715" lastFinishedPulling="2025-03-17 17:49:06.829046768 +0000 UTC m=+30.759323482" observedRunningTime="2025-03-17 17:49:07.330770746 +0000 UTC m=+31.261047460" watchObservedRunningTime="2025-03-17 17:49:07.331714558 +0000 UTC m=+31.261991272" Mar 17 17:49:07.353468 kubelet[2846]: E0317 17:49:07.352612 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.353468 kubelet[2846]: W0317 17:49:07.353397 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.353468 kubelet[2846]: E0317 17:49:07.353429 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.354085 kubelet[2846]: E0317 17:49:07.353958 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.354085 kubelet[2846]: W0317 17:49:07.353974 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.354085 kubelet[2846]: E0317 17:49:07.353999 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.354331 kubelet[2846]: E0317 17:49:07.354207 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.354331 kubelet[2846]: W0317 17:49:07.354216 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.354331 kubelet[2846]: E0317 17:49:07.354226 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.356103 kubelet[2846]: E0317 17:49:07.355952 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.356103 kubelet[2846]: W0317 17:49:07.355988 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.356103 kubelet[2846]: E0317 17:49:07.356004 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.356531 kubelet[2846]: E0317 17:49:07.356415 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.356531 kubelet[2846]: W0317 17:49:07.356428 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.356531 kubelet[2846]: E0317 17:49:07.356439 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.356815 kubelet[2846]: E0317 17:49:07.356652 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.356815 kubelet[2846]: W0317 17:49:07.356661 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.356815 kubelet[2846]: E0317 17:49:07.356671 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.357109 kubelet[2846]: E0317 17:49:07.357068 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.357109 kubelet[2846]: W0317 17:49:07.357081 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.357387 kubelet[2846]: E0317 17:49:07.357092 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.358448 kubelet[2846]: E0317 17:49:07.358213 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.358448 kubelet[2846]: W0317 17:49:07.358230 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.358448 kubelet[2846]: E0317 17:49:07.358245 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.359246 kubelet[2846]: E0317 17:49:07.358776 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.359246 kubelet[2846]: W0317 17:49:07.358788 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.359246 kubelet[2846]: E0317 17:49:07.358800 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.359700 kubelet[2846]: E0317 17:49:07.359591 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.359700 kubelet[2846]: W0317 17:49:07.359604 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.359700 kubelet[2846]: E0317 17:49:07.359622 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.360017 kubelet[2846]: E0317 17:49:07.359897 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.360017 kubelet[2846]: W0317 17:49:07.359908 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.360017 kubelet[2846]: E0317 17:49:07.359918 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.360462 kubelet[2846]: E0317 17:49:07.360331 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.360462 kubelet[2846]: W0317 17:49:07.360386 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.360462 kubelet[2846]: E0317 17:49:07.360398 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.360833 kubelet[2846]: E0317 17:49:07.360765 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.360833 kubelet[2846]: W0317 17:49:07.360776 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.360833 kubelet[2846]: E0317 17:49:07.360788 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.361199 kubelet[2846]: E0317 17:49:07.361158 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.361384 kubelet[2846]: W0317 17:49:07.361269 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.361384 kubelet[2846]: E0317 17:49:07.361299 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.361815 kubelet[2846]: E0317 17:49:07.361743 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.361815 kubelet[2846]: W0317 17:49:07.361754 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.361815 kubelet[2846]: E0317 17:49:07.361764 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.414591 kubelet[2846]: E0317 17:49:07.414547 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.414591 kubelet[2846]: W0317 17:49:07.414582 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.414750 kubelet[2846]: E0317 17:49:07.414609 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.414963 kubelet[2846]: E0317 17:49:07.414938 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.414963 kubelet[2846]: W0317 17:49:07.414957 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.415025 kubelet[2846]: E0317 17:49:07.414996 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.415331 kubelet[2846]: E0317 17:49:07.415306 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.415372 kubelet[2846]: W0317 17:49:07.415330 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.415457 kubelet[2846]: E0317 17:49:07.415437 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.415611 kubelet[2846]: E0317 17:49:07.415594 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.415643 kubelet[2846]: W0317 17:49:07.415610 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.415643 kubelet[2846]: E0317 17:49:07.415630 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.415886 kubelet[2846]: E0317 17:49:07.415867 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.415917 kubelet[2846]: W0317 17:49:07.415884 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.415917 kubelet[2846]: E0317 17:49:07.415907 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.416147 kubelet[2846]: E0317 17:49:07.416128 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.416147 kubelet[2846]: W0317 17:49:07.416145 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.416201 kubelet[2846]: E0317 17:49:07.416163 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.417364 kubelet[2846]: E0317 17:49:07.416437 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.417364 kubelet[2846]: W0317 17:49:07.416454 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.417364 kubelet[2846]: E0317 17:49:07.416553 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.417364 kubelet[2846]: E0317 17:49:07.416715 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.417364 kubelet[2846]: W0317 17:49:07.416727 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.417364 kubelet[2846]: E0317 17:49:07.416812 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.417364 kubelet[2846]: E0317 17:49:07.416958 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.417364 kubelet[2846]: W0317 17:49:07.416967 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.417364 kubelet[2846]: E0317 17:49:07.417096 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.417364 kubelet[2846]: E0317 17:49:07.417225 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.417602 kubelet[2846]: W0317 17:49:07.417235 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.417602 kubelet[2846]: E0317 17:49:07.417257 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.417909 kubelet[2846]: E0317 17:49:07.417880 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.417909 kubelet[2846]: W0317 17:49:07.417905 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.417958 kubelet[2846]: E0317 17:49:07.417925 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.418148 kubelet[2846]: E0317 17:49:07.418129 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.418175 kubelet[2846]: W0317 17:49:07.418148 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.418175 kubelet[2846]: E0317 17:49:07.418161 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.418648 kubelet[2846]: E0317 17:49:07.418625 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.419495 kubelet[2846]: W0317 17:49:07.419463 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.419641 kubelet[2846]: E0317 17:49:07.419610 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.420011 kubelet[2846]: E0317 17:49:07.419972 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.420011 kubelet[2846]: W0317 17:49:07.420001 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.420103 kubelet[2846]: E0317 17:49:07.420074 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.420260 kubelet[2846]: E0317 17:49:07.420194 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.420307 kubelet[2846]: W0317 17:49:07.420272 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.420456 kubelet[2846]: E0317 17:49:07.420374 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.420516 kubelet[2846]: E0317 17:49:07.420482 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.420516 kubelet[2846]: W0317 17:49:07.420489 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.420516 kubelet[2846]: E0317 17:49:07.420502 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:07.420670 kubelet[2846]: E0317 17:49:07.420658 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.420703 kubelet[2846]: W0317 17:49:07.420669 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.420703 kubelet[2846]: E0317 17:49:07.420679 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:07.421314 kubelet[2846]: E0317 17:49:07.421293 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:07.421314 kubelet[2846]: W0317 17:49:07.421309 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:07.421428 kubelet[2846]: E0317 17:49:07.421321 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.320805 kubelet[2846]: I0317 17:49:08.319735 2846 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:49:08.368853 kubelet[2846]: E0317 17:49:08.368819 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.369167 kubelet[2846]: W0317 17:49:08.369142 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.369287 kubelet[2846]: E0317 17:49:08.369266 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.369679 kubelet[2846]: E0317 17:49:08.369661 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.369863 kubelet[2846]: W0317 17:49:08.369781 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.369863 kubelet[2846]: E0317 17:49:08.369807 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.370420 kubelet[2846]: E0317 17:49:08.370243 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.370420 kubelet[2846]: W0317 17:49:08.370262 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.370420 kubelet[2846]: E0317 17:49:08.370279 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.370773 kubelet[2846]: E0317 17:49:08.370670 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.370773 kubelet[2846]: W0317 17:49:08.370686 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.370773 kubelet[2846]: E0317 17:49:08.370702 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.371237 kubelet[2846]: E0317 17:49:08.371107 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.371237 kubelet[2846]: W0317 17:49:08.371121 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.371237 kubelet[2846]: E0317 17:49:08.371134 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.371579 kubelet[2846]: E0317 17:49:08.371459 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.371579 kubelet[2846]: W0317 17:49:08.371474 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.371579 kubelet[2846]: E0317 17:49:08.371486 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.371882 kubelet[2846]: E0317 17:49:08.371762 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.371882 kubelet[2846]: W0317 17:49:08.371775 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.371882 kubelet[2846]: E0317 17:49:08.371786 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.372175 kubelet[2846]: E0317 17:49:08.372091 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.372175 kubelet[2846]: W0317 17:49:08.372105 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.372175 kubelet[2846]: E0317 17:49:08.372118 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.372622 kubelet[2846]: E0317 17:49:08.372506 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.372622 kubelet[2846]: W0317 17:49:08.372520 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.372622 kubelet[2846]: E0317 17:49:08.372533 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.372920 kubelet[2846]: E0317 17:49:08.372813 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.372920 kubelet[2846]: W0317 17:49:08.372825 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.372920 kubelet[2846]: E0317 17:49:08.372835 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.373151 kubelet[2846]: E0317 17:49:08.373089 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.373151 kubelet[2846]: W0317 17:49:08.373100 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.373151 kubelet[2846]: E0317 17:49:08.373110 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.373672 kubelet[2846]: E0317 17:49:08.373545 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.373672 kubelet[2846]: W0317 17:49:08.373568 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.373672 kubelet[2846]: E0317 17:49:08.373590 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.374268 kubelet[2846]: E0317 17:49:08.374094 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.374268 kubelet[2846]: W0317 17:49:08.374113 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.374268 kubelet[2846]: E0317 17:49:08.374132 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.374688 kubelet[2846]: E0317 17:49:08.374563 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.374688 kubelet[2846]: W0317 17:49:08.374584 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.374688 kubelet[2846]: E0317 17:49:08.374604 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.375186 kubelet[2846]: E0317 17:49:08.375088 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.375186 kubelet[2846]: W0317 17:49:08.375108 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.375186 kubelet[2846]: E0317 17:49:08.375123 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.426487 kubelet[2846]: E0317 17:49:08.426123 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.426487 kubelet[2846]: W0317 17:49:08.426161 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.426487 kubelet[2846]: E0317 17:49:08.426198 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.426960 kubelet[2846]: E0317 17:49:08.426937 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.427249 kubelet[2846]: W0317 17:49:08.427102 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.427249 kubelet[2846]: E0317 17:49:08.427152 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.427620 kubelet[2846]: E0317 17:49:08.427523 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.427620 kubelet[2846]: W0317 17:49:08.427566 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.427620 kubelet[2846]: E0317 17:49:08.427592 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.428148 kubelet[2846]: E0317 17:49:08.427887 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.428148 kubelet[2846]: W0317 17:49:08.427901 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.428148 kubelet[2846]: E0317 17:49:08.427924 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.428438 kubelet[2846]: E0317 17:49:08.428418 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.428641 kubelet[2846]: W0317 17:49:08.428520 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.428641 kubelet[2846]: E0317 17:49:08.428559 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.428963 kubelet[2846]: E0317 17:49:08.428937 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.428963 kubelet[2846]: W0317 17:49:08.428960 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.429207 kubelet[2846]: E0317 17:49:08.429018 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.429319 kubelet[2846]: E0317 17:49:08.429300 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.429495 kubelet[2846]: W0317 17:49:08.429321 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.429495 kubelet[2846]: E0317 17:49:08.429446 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.429700 kubelet[2846]: E0317 17:49:08.429597 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.429700 kubelet[2846]: W0317 17:49:08.429611 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.429700 kubelet[2846]: E0317 17:49:08.429649 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.429959 kubelet[2846]: E0317 17:49:08.429859 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.429959 kubelet[2846]: W0317 17:49:08.429871 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.429959 kubelet[2846]: E0317 17:49:08.429886 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.430214 kubelet[2846]: E0317 17:49:08.430187 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.430214 kubelet[2846]: W0317 17:49:08.430210 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.430343 kubelet[2846]: E0317 17:49:08.430228 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.430536 kubelet[2846]: E0317 17:49:08.430507 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.430536 kubelet[2846]: W0317 17:49:08.430528 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.430885 kubelet[2846]: E0317 17:49:08.430552 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.431177 kubelet[2846]: E0317 17:49:08.430965 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.431177 kubelet[2846]: W0317 17:49:08.431009 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.431177 kubelet[2846]: E0317 17:49:08.431043 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.431443 kubelet[2846]: E0317 17:49:08.431420 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.431443 kubelet[2846]: W0317 17:49:08.431437 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.431562 kubelet[2846]: E0317 17:49:08.431456 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.432053 kubelet[2846]: E0317 17:49:08.431726 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.432053 kubelet[2846]: W0317 17:49:08.431749 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.432053 kubelet[2846]: E0317 17:49:08.431764 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.432260 kubelet[2846]: E0317 17:49:08.432243 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.432323 kubelet[2846]: W0317 17:49:08.432310 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.432413 kubelet[2846]: E0317 17:49:08.432401 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.432758 kubelet[2846]: E0317 17:49:08.432728 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.433028 kubelet[2846]: W0317 17:49:08.432894 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.433028 kubelet[2846]: E0317 17:49:08.432943 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.433221 kubelet[2846]: E0317 17:49:08.433185 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.433221 kubelet[2846]: W0317 17:49:08.433204 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.433221 kubelet[2846]: E0317 17:49:08.433218 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:08.433731 kubelet[2846]: E0317 17:49:08.433713 2846 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:08.433731 kubelet[2846]: W0317 17:49:08.433726 2846 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:08.433731 kubelet[2846]: E0317 17:49:08.433737 2846 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:08.562481 containerd[1502]: time="2025-03-17T17:49:08.562338455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:08.564749 containerd[1502]: time="2025-03-17T17:49:08.564235399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5120152" Mar 17 17:49:08.566926 containerd[1502]: time="2025-03-17T17:49:08.565796259Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:08.570154 containerd[1502]: time="2025-03-17T17:49:08.570090634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:08.570896 containerd[1502]: time="2025-03-17T17:49:08.570804163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 1.739879733s" Mar 17 17:49:08.571054 containerd[1502]: time="2025-03-17T17:49:08.571021486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\"" Mar 17 17:49:08.577480 containerd[1502]: time="2025-03-17T17:49:08.577438088Z" level=info msg="CreateContainer within sandbox \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:49:08.596236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1951547490.mount: Deactivated successfully. Mar 17 17:49:08.600136 containerd[1502]: time="2025-03-17T17:49:08.600071899Z" level=info msg="CreateContainer within sandbox \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222\"" Mar 17 17:49:08.602269 containerd[1502]: time="2025-03-17T17:49:08.602242647Z" level=info msg="StartContainer for \"05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222\"" Mar 17 17:49:08.632559 systemd[1]: Started cri-containerd-05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222.scope - libcontainer container 05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222. Mar 17 17:49:08.664109 containerd[1502]: time="2025-03-17T17:49:08.662819504Z" level=info msg="StartContainer for \"05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222\" returns successfully" Mar 17 17:49:08.683164 systemd[1]: cri-containerd-05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222.scope: Deactivated successfully. 
Mar 17 17:49:08.796733 containerd[1502]: time="2025-03-17T17:49:08.796665021Z" level=info msg="shim disconnected" id=05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222 namespace=k8s.io Mar 17 17:49:08.796733 containerd[1502]: time="2025-03-17T17:49:08.796728902Z" level=warning msg="cleaning up after shim disconnected" id=05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222 namespace=k8s.io Mar 17 17:49:08.796733 containerd[1502]: time="2025-03-17T17:49:08.796742142Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:08.836237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222-rootfs.mount: Deactivated successfully. Mar 17 17:49:09.181930 kubelet[2846]: E0317 17:49:09.181662 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht6qb" podUID="37c19387-6a1a-435e-b624-cd3e3f772523" Mar 17 17:49:09.332220 containerd[1502]: time="2025-03-17T17:49:09.331749088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 17 17:49:11.181251 kubelet[2846]: E0317 17:49:11.181196 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht6qb" podUID="37c19387-6a1a-435e-b624-cd3e3f772523" Mar 17 17:49:12.167586 containerd[1502]: time="2025-03-17T17:49:12.167517513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:12.168992 containerd[1502]: time="2025-03-17T17:49:12.168933574Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396" Mar 17 17:49:12.170098 containerd[1502]: time="2025-03-17T17:49:12.169779946Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:12.172744 containerd[1502]: time="2025-03-17T17:49:12.172711029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:12.173669 containerd[1502]: time="2025-03-17T17:49:12.173634163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"92597153\" in 2.841828395s" Mar 17 17:49:12.173732 containerd[1502]: time="2025-03-17T17:49:12.173669043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\"" Mar 17 17:49:12.178154 containerd[1502]: time="2025-03-17T17:49:12.177920906Z" level=info msg="CreateContainer within sandbox \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:49:12.196093 containerd[1502]: time="2025-03-17T17:49:12.196032011Z" level=info msg="CreateContainer within sandbox \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374\"" Mar 17 17:49:12.196852 containerd[1502]: time="2025-03-17T17:49:12.196830663Z" level=info msg="StartContainer 
for \"2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374\"" Mar 17 17:49:12.232593 systemd[1]: Started cri-containerd-2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374.scope - libcontainer container 2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374. Mar 17 17:49:12.266042 containerd[1502]: time="2025-03-17T17:49:12.265901277Z" level=info msg="StartContainer for \"2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374\" returns successfully" Mar 17 17:49:12.791943 containerd[1502]: time="2025-03-17T17:49:12.791888080Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:49:12.794869 systemd[1]: cri-containerd-2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374.scope: Deactivated successfully. Mar 17 17:49:12.795391 systemd[1]: cri-containerd-2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374.scope: Consumed 488ms CPU time, 173.5M memory peak, 150.3M written to disk. Mar 17 17:49:12.822692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374-rootfs.mount: Deactivated successfully. 
Mar 17 17:49:12.868486 kubelet[2846]: I0317 17:49:12.868457 2846 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:49:12.904886 kubelet[2846]: I0317 17:49:12.902924 2846 topology_manager.go:215] "Topology Admit Handler" podUID="18fbf695-ee31-4ad3-8e56-31fea597eadd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5t9xs" Mar 17 17:49:12.906800 kubelet[2846]: I0317 17:49:12.905840 2846 topology_manager.go:215] "Topology Admit Handler" podUID="89e31e82-fcf0-4b12-9877-940dcbb04dfb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9n8th" Mar 17 17:49:12.910499 kubelet[2846]: I0317 17:49:12.909088 2846 topology_manager.go:215] "Topology Admit Handler" podUID="a74fbbc9-5937-415b-8d68-ed4ea0db44e4" podNamespace="calico-apiserver" podName="calico-apiserver-c5545ddd8-nd4n6" Mar 17 17:49:12.920652 kubelet[2846]: I0317 17:49:12.918498 2846 topology_manager.go:215] "Topology Admit Handler" podUID="74b237a4-f5d5-48d3-8f38-13c8c4872091" podNamespace="calico-apiserver" podName="calico-apiserver-c5545ddd8-bblw2" Mar 17 17:49:12.925893 systemd[1]: Created slice kubepods-burstable-pod18fbf695_ee31_4ad3_8e56_31fea597eadd.slice - libcontainer container kubepods-burstable-pod18fbf695_ee31_4ad3_8e56_31fea597eadd.slice. 
Mar 17 17:49:12.928509 containerd[1502]: time="2025-03-17T17:49:12.927144906Z" level=info msg="shim disconnected" id=2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374 namespace=k8s.io Mar 17 17:49:12.928509 containerd[1502]: time="2025-03-17T17:49:12.927198787Z" level=warning msg="cleaning up after shim disconnected" id=2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374 namespace=k8s.io Mar 17 17:49:12.928509 containerd[1502]: time="2025-03-17T17:49:12.927206587Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:12.936455 kubelet[2846]: I0317 17:49:12.936416 2846 topology_manager.go:215] "Topology Admit Handler" podUID="447aab82-3c54-4fc9-a563-99b96e52f28a" podNamespace="calico-system" podName="calico-kube-controllers-6fb9b7ff55-f6s4k" Mar 17 17:49:12.942455 systemd[1]: Created slice kubepods-burstable-pod89e31e82_fcf0_4b12_9877_940dcbb04dfb.slice - libcontainer container kubepods-burstable-pod89e31e82_fcf0_4b12_9877_940dcbb04dfb.slice. Mar 17 17:49:12.955614 systemd[1]: Created slice kubepods-besteffort-pod447aab82_3c54_4fc9_a563_99b96e52f28a.slice - libcontainer container kubepods-besteffort-pod447aab82_3c54_4fc9_a563_99b96e52f28a.slice. 
Mar 17 17:49:12.963790 kubelet[2846]: I0317 17:49:12.962911 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a74fbbc9-5937-415b-8d68-ed4ea0db44e4-calico-apiserver-certs\") pod \"calico-apiserver-c5545ddd8-nd4n6\" (UID: \"a74fbbc9-5937-415b-8d68-ed4ea0db44e4\") " pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" Mar 17 17:49:12.964398 kubelet[2846]: I0317 17:49:12.963979 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blhkk\" (UniqueName: \"kubernetes.io/projected/a74fbbc9-5937-415b-8d68-ed4ea0db44e4-kube-api-access-blhkk\") pod \"calico-apiserver-c5545ddd8-nd4n6\" (UID: \"a74fbbc9-5937-415b-8d68-ed4ea0db44e4\") " pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" Mar 17 17:49:12.964398 kubelet[2846]: I0317 17:49:12.964147 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkp9d\" (UniqueName: \"kubernetes.io/projected/18fbf695-ee31-4ad3-8e56-31fea597eadd-kube-api-access-kkp9d\") pod \"coredns-7db6d8ff4d-5t9xs\" (UID: \"18fbf695-ee31-4ad3-8e56-31fea597eadd\") " pod="kube-system/coredns-7db6d8ff4d-5t9xs" Mar 17 17:49:12.964398 kubelet[2846]: I0317 17:49:12.964170 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18fbf695-ee31-4ad3-8e56-31fea597eadd-config-volume\") pod \"coredns-7db6d8ff4d-5t9xs\" (UID: \"18fbf695-ee31-4ad3-8e56-31fea597eadd\") " pod="kube-system/coredns-7db6d8ff4d-5t9xs" Mar 17 17:49:12.964398 kubelet[2846]: I0317 17:49:12.964225 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2pj6\" (UniqueName: \"kubernetes.io/projected/89e31e82-fcf0-4b12-9877-940dcbb04dfb-kube-api-access-t2pj6\") pod \"coredns-7db6d8ff4d-9n8th\" 
(UID: \"89e31e82-fcf0-4b12-9877-940dcbb04dfb\") " pod="kube-system/coredns-7db6d8ff4d-9n8th" Mar 17 17:49:12.964398 kubelet[2846]: I0317 17:49:12.964251 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89e31e82-fcf0-4b12-9877-940dcbb04dfb-config-volume\") pod \"coredns-7db6d8ff4d-9n8th\" (UID: \"89e31e82-fcf0-4b12-9877-940dcbb04dfb\") " pod="kube-system/coredns-7db6d8ff4d-9n8th" Mar 17 17:49:12.972250 systemd[1]: Created slice kubepods-besteffort-poda74fbbc9_5937_415b_8d68_ed4ea0db44e4.slice - libcontainer container kubepods-besteffort-poda74fbbc9_5937_415b_8d68_ed4ea0db44e4.slice. Mar 17 17:49:12.980309 systemd[1]: Created slice kubepods-besteffort-pod74b237a4_f5d5_48d3_8f38_13c8c4872091.slice - libcontainer container kubepods-besteffort-pod74b237a4_f5d5_48d3_8f38_13c8c4872091.slice. Mar 17 17:49:13.067714 kubelet[2846]: I0317 17:49:13.064933 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/74b237a4-f5d5-48d3-8f38-13c8c4872091-calico-apiserver-certs\") pod \"calico-apiserver-c5545ddd8-bblw2\" (UID: \"74b237a4-f5d5-48d3-8f38-13c8c4872091\") " pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" Mar 17 17:49:13.067714 kubelet[2846]: I0317 17:49:13.065049 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djrbf\" (UniqueName: \"kubernetes.io/projected/74b237a4-f5d5-48d3-8f38-13c8c4872091-kube-api-access-djrbf\") pod \"calico-apiserver-c5545ddd8-bblw2\" (UID: \"74b237a4-f5d5-48d3-8f38-13c8c4872091\") " pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" Mar 17 17:49:13.067714 kubelet[2846]: I0317 17:49:13.065114 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/447aab82-3c54-4fc9-a563-99b96e52f28a-tigera-ca-bundle\") pod \"calico-kube-controllers-6fb9b7ff55-f6s4k\" (UID: \"447aab82-3c54-4fc9-a563-99b96e52f28a\") " pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" Mar 17 17:49:13.067714 kubelet[2846]: I0317 17:49:13.065163 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w59j\" (UniqueName: \"kubernetes.io/projected/447aab82-3c54-4fc9-a563-99b96e52f28a-kube-api-access-5w59j\") pod \"calico-kube-controllers-6fb9b7ff55-f6s4k\" (UID: \"447aab82-3c54-4fc9-a563-99b96e52f28a\") " pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" Mar 17 17:49:13.188276 systemd[1]: Created slice kubepods-besteffort-pod37c19387_6a1a_435e_b624_cd3e3f772523.slice - libcontainer container kubepods-besteffort-pod37c19387_6a1a_435e_b624_cd3e3f772523.slice. Mar 17 17:49:13.205098 containerd[1502]: time="2025-03-17T17:49:13.204560466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht6qb,Uid:37c19387-6a1a-435e-b624-cd3e3f772523,Namespace:calico-system,Attempt:0,}" Mar 17 17:49:13.247402 containerd[1502]: time="2025-03-17T17:49:13.247202190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5t9xs,Uid:18fbf695-ee31-4ad3-8e56-31fea597eadd,Namespace:kube-system,Attempt:0,}" Mar 17 17:49:13.250802 containerd[1502]: time="2025-03-17T17:49:13.250505560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n8th,Uid:89e31e82-fcf0-4b12-9877-940dcbb04dfb,Namespace:kube-system,Attempt:0,}" Mar 17 17:49:13.272379 containerd[1502]: time="2025-03-17T17:49:13.270189417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb9b7ff55-f6s4k,Uid:447aab82-3c54-4fc9-a563-99b96e52f28a,Namespace:calico-system,Attempt:0,}" Mar 17 17:49:13.281332 containerd[1502]: time="2025-03-17T17:49:13.281228584Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-nd4n6,Uid:a74fbbc9-5937-415b-8d68-ed4ea0db44e4,Namespace:calico-apiserver,Attempt:0,}" Mar 17 17:49:13.289757 containerd[1502]: time="2025-03-17T17:49:13.289682672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-bblw2,Uid:74b237a4-f5d5-48d3-8f38-13c8c4872091,Namespace:calico-apiserver,Attempt:0,}" Mar 17 17:49:13.311110 containerd[1502]: time="2025-03-17T17:49:13.310975314Z" level=error msg="Failed to destroy network for sandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.312206 containerd[1502]: time="2025-03-17T17:49:13.312171172Z" level=error msg="encountered an error cleaning up failed sandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.312703 containerd[1502]: time="2025-03-17T17:49:13.312335214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht6qb,Uid:37c19387-6a1a-435e-b624-cd3e3f772523,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.312772 kubelet[2846]: E0317 17:49:13.312587 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.312772 kubelet[2846]: E0317 17:49:13.312658 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:13.312772 kubelet[2846]: E0317 17:49:13.312676 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:13.312966 kubelet[2846]: E0317 17:49:13.312930 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ht6qb_calico-system(37c19387-6a1a-435e-b624-cd3e3f772523)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ht6qb_calico-system(37c19387-6a1a-435e-b624-cd3e3f772523)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ht6qb" 
podUID="37c19387-6a1a-435e-b624-cd3e3f772523" Mar 17 17:49:13.354315 containerd[1502]: time="2025-03-17T17:49:13.352634303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 17 17:49:13.361546 kubelet[2846]: I0317 17:49:13.361509 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2" Mar 17 17:49:13.364091 containerd[1502]: time="2025-03-17T17:49:13.363327905Z" level=info msg="StopPodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\"" Mar 17 17:49:13.367378 containerd[1502]: time="2025-03-17T17:49:13.365568938Z" level=info msg="Ensure that sandbox 8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2 in task-service has been cleanup successfully" Mar 17 17:49:13.370667 containerd[1502]: time="2025-03-17T17:49:13.370460172Z" level=info msg="TearDown network for sandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" successfully" Mar 17 17:49:13.370667 containerd[1502]: time="2025-03-17T17:49:13.370513173Z" level=info msg="StopPodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" returns successfully" Mar 17 17:49:13.383384 containerd[1502]: time="2025-03-17T17:49:13.380145319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht6qb,Uid:37c19387-6a1a-435e-b624-cd3e3f772523,Namespace:calico-system,Attempt:1,}" Mar 17 17:49:13.475908 containerd[1502]: time="2025-03-17T17:49:13.475847725Z" level=error msg="Failed to destroy network for sandbox \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.478653 containerd[1502]: time="2025-03-17T17:49:13.478606486Z" level=error msg="encountered an error cleaning up failed 
sandbox \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.478884 containerd[1502]: time="2025-03-17T17:49:13.478818810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5t9xs,Uid:18fbf695-ee31-4ad3-8e56-31fea597eadd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.480272 kubelet[2846]: E0317 17:49:13.480231 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.480385 kubelet[2846]: E0317 17:49:13.480288 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5t9xs" Mar 17 17:49:13.480385 kubelet[2846]: E0317 17:49:13.480310 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5t9xs" Mar 17 17:49:13.480385 kubelet[2846]: E0317 17:49:13.480362 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5t9xs_kube-system(18fbf695-ee31-4ad3-8e56-31fea597eadd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5t9xs_kube-system(18fbf695-ee31-4ad3-8e56-31fea597eadd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5t9xs" podUID="18fbf695-ee31-4ad3-8e56-31fea597eadd" Mar 17 17:49:13.518693 containerd[1502]: time="2025-03-17T17:49:13.518535450Z" level=error msg="Failed to destroy network for sandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.520098 containerd[1502]: time="2025-03-17T17:49:13.520057633Z" level=error msg="encountered an error cleaning up failed sandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.520258 containerd[1502]: time="2025-03-17T17:49:13.520237195Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-nd4n6,Uid:a74fbbc9-5937-415b-8d68-ed4ea0db44e4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.521625 kubelet[2846]: E0317 17:49:13.520570 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.521625 kubelet[2846]: E0317 17:49:13.520622 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" Mar 17 17:49:13.521625 kubelet[2846]: E0317 17:49:13.520644 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" Mar 17 17:49:13.521768 kubelet[2846]: E0317 17:49:13.520682 
2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5545ddd8-nd4n6_calico-apiserver(a74fbbc9-5937-415b-8d68-ed4ea0db44e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5545ddd8-nd4n6_calico-apiserver(a74fbbc9-5937-415b-8d68-ed4ea0db44e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" podUID="a74fbbc9-5937-415b-8d68-ed4ea0db44e4" Mar 17 17:49:13.527590 containerd[1502]: time="2025-03-17T17:49:13.527457424Z" level=error msg="Failed to destroy network for sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.528014 containerd[1502]: time="2025-03-17T17:49:13.527986712Z" level=error msg="encountered an error cleaning up failed sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.528268 containerd[1502]: time="2025-03-17T17:49:13.528168075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb9b7ff55-f6s4k,Uid:447aab82-3c54-4fc9-a563-99b96e52f28a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.528427 kubelet[2846]: E0317 17:49:13.528376 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.528479 kubelet[2846]: E0317 17:49:13.528432 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" Mar 17 17:49:13.528479 kubelet[2846]: E0317 17:49:13.528451 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" Mar 17 17:49:13.528526 kubelet[2846]: E0317 17:49:13.528486 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fb9b7ff55-f6s4k_calico-system(447aab82-3c54-4fc9-a563-99b96e52f28a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6fb9b7ff55-f6s4k_calico-system(447aab82-3c54-4fc9-a563-99b96e52f28a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" podUID="447aab82-3c54-4fc9-a563-99b96e52f28a" Mar 17 17:49:13.530597 containerd[1502]: time="2025-03-17T17:49:13.529829500Z" level=error msg="Failed to destroy network for sandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.530597 containerd[1502]: time="2025-03-17T17:49:13.530182466Z" level=error msg="encountered an error cleaning up failed sandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.530597 containerd[1502]: time="2025-03-17T17:49:13.530234986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n8th,Uid:89e31e82-fcf0-4b12-9877-940dcbb04dfb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.531105 kubelet[2846]: E0317 17:49:13.530915 2846 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.531105 kubelet[2846]: E0317 17:49:13.530971 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9n8th" Mar 17 17:49:13.531105 kubelet[2846]: E0317 17:49:13.530988 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9n8th" Mar 17 17:49:13.531223 kubelet[2846]: E0317 17:49:13.531043 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9n8th_kube-system(89e31e82-fcf0-4b12-9877-940dcbb04dfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9n8th_kube-system(89e31e82-fcf0-4b12-9877-940dcbb04dfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9n8th" podUID="89e31e82-fcf0-4b12-9877-940dcbb04dfb" Mar 17 17:49:13.540221 containerd[1502]: time="2025-03-17T17:49:13.539981814Z" level=error msg="Failed to destroy network for sandbox \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.541505 containerd[1502]: time="2025-03-17T17:49:13.541454676Z" level=error msg="encountered an error cleaning up failed sandbox \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.541743 containerd[1502]: time="2025-03-17T17:49:13.541714360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-bblw2,Uid:74b237a4-f5d5-48d3-8f38-13c8c4872091,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.542189 kubelet[2846]: E0317 17:49:13.542142 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 17 17:49:13.542282 kubelet[2846]: E0317 17:49:13.542197 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" Mar 17 17:49:13.542282 kubelet[2846]: E0317 17:49:13.542219 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" Mar 17 17:49:13.542391 kubelet[2846]: E0317 17:49:13.542275 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5545ddd8-bblw2_calico-apiserver(74b237a4-f5d5-48d3-8f38-13c8c4872091)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5545ddd8-bblw2_calico-apiserver(74b237a4-f5d5-48d3-8f38-13c8c4872091)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" podUID="74b237a4-f5d5-48d3-8f38-13c8c4872091" Mar 17 17:49:13.556977 containerd[1502]: time="2025-03-17T17:49:13.556407302Z" level=error msg="Failed to destroy network for sandbox 
\"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.556977 containerd[1502]: time="2025-03-17T17:49:13.556765427Z" level=error msg="encountered an error cleaning up failed sandbox \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.556977 containerd[1502]: time="2025-03-17T17:49:13.556837148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht6qb,Uid:37c19387-6a1a-435e-b624-cd3e3f772523,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.558961 kubelet[2846]: E0317 17:49:13.557077 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:13.558961 kubelet[2846]: E0317 17:49:13.557131 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:13.558961 kubelet[2846]: E0317 17:49:13.557151 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:13.559093 kubelet[2846]: E0317 17:49:13.557195 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ht6qb_calico-system(37c19387-6a1a-435e-b624-cd3e3f772523)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ht6qb_calico-system(37c19387-6a1a-435e-b624-cd3e3f772523)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ht6qb" podUID="37c19387-6a1a-435e-b624-cd3e3f772523" Mar 17 17:49:14.196041 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350-shm.mount: Deactivated successfully. Mar 17 17:49:14.196647 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a-shm.mount: Deactivated successfully. Mar 17 17:49:14.197051 systemd[1]: run-netns-cni\x2d38a9aa1e\x2d01c2\x2d0797\x2d91e8\x2dd70449a4f7c4.mount: Deactivated successfully. 
Mar 17 17:49:14.197115 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2-shm.mount: Deactivated successfully. Mar 17 17:49:14.365680 kubelet[2846]: I0317 17:49:14.365614 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350" Mar 17 17:49:14.370388 containerd[1502]: time="2025-03-17T17:49:14.367851514Z" level=info msg="StopPodSandbox for \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\"" Mar 17 17:49:14.370388 containerd[1502]: time="2025-03-17T17:49:14.368052317Z" level=info msg="Ensure that sandbox 7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350 in task-service has been cleanup successfully" Mar 17 17:49:14.370162 systemd[1]: run-netns-cni\x2dfe9f238e\x2d9582\x2d8b21\x2df736\x2dfc95b3173117.mount: Deactivated successfully. Mar 17 17:49:14.372911 containerd[1502]: time="2025-03-17T17:49:14.371774455Z" level=info msg="TearDown network for sandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\" successfully" Mar 17 17:49:14.372911 containerd[1502]: time="2025-03-17T17:49:14.371808015Z" level=info msg="StopPodSandbox for \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\" returns successfully" Mar 17 17:49:14.373813 kubelet[2846]: I0317 17:49:14.372047 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a" Mar 17 17:49:14.374864 containerd[1502]: time="2025-03-17T17:49:14.374458577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n8th,Uid:89e31e82-fcf0-4b12-9877-940dcbb04dfb,Namespace:kube-system,Attempt:1,}" Mar 17 17:49:14.375790 containerd[1502]: time="2025-03-17T17:49:14.375740196Z" level=info msg="StopPodSandbox for \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\"" 
Mar 17 17:49:14.376663 containerd[1502]: time="2025-03-17T17:49:14.376350206Z" level=info msg="Ensure that sandbox 5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a in task-service has been cleanup successfully" Mar 17 17:49:14.377116 containerd[1502]: time="2025-03-17T17:49:14.377014576Z" level=info msg="TearDown network for sandbox \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\" successfully" Mar 17 17:49:14.377116 containerd[1502]: time="2025-03-17T17:49:14.377083297Z" level=info msg="StopPodSandbox for \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\" returns successfully" Mar 17 17:49:14.379514 kubelet[2846]: I0317 17:49:14.379377 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90" Mar 17 17:49:14.379799 systemd[1]: run-netns-cni\x2d6d70a8dd\x2db264\x2dd1f5\x2dd485\x2d3c9a88d02e55.mount: Deactivated successfully. Mar 17 17:49:14.381335 containerd[1502]: time="2025-03-17T17:49:14.380720714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5t9xs,Uid:18fbf695-ee31-4ad3-8e56-31fea597eadd,Namespace:kube-system,Attempt:1,}" Mar 17 17:49:14.381603 containerd[1502]: time="2025-03-17T17:49:14.381538006Z" level=info msg="StopPodSandbox for \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\"" Mar 17 17:49:14.382369 containerd[1502]: time="2025-03-17T17:49:14.382114935Z" level=info msg="Ensure that sandbox 9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90 in task-service has been cleanup successfully" Mar 17 17:49:14.383204 containerd[1502]: time="2025-03-17T17:49:14.382431780Z" level=info msg="TearDown network for sandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\" successfully" Mar 17 17:49:14.383204 containerd[1502]: time="2025-03-17T17:49:14.382447821Z" level=info msg="StopPodSandbox for 
\"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\" returns successfully" Mar 17 17:49:14.385781 containerd[1502]: time="2025-03-17T17:49:14.385738992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-nd4n6,Uid:a74fbbc9-5937-415b-8d68-ed4ea0db44e4,Namespace:calico-apiserver,Attempt:1,}" Mar 17 17:49:14.387721 systemd[1]: run-netns-cni\x2d0558b61c\x2d98c5\x2dae3b\x2d8b88\x2d3fb9c42179e4.mount: Deactivated successfully. Mar 17 17:49:14.390483 kubelet[2846]: I0317 17:49:14.390448 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268" Mar 17 17:49:14.391463 containerd[1502]: time="2025-03-17T17:49:14.391288998Z" level=info msg="StopPodSandbox for \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\"" Mar 17 17:49:14.392409 containerd[1502]: time="2025-03-17T17:49:14.392111131Z" level=info msg="Ensure that sandbox ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268 in task-service has been cleanup successfully" Mar 17 17:49:14.396329 containerd[1502]: time="2025-03-17T17:49:14.392920783Z" level=info msg="TearDown network for sandbox \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\" successfully" Mar 17 17:49:14.396329 containerd[1502]: time="2025-03-17T17:49:14.392951744Z" level=info msg="StopPodSandbox for \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\" returns successfully" Mar 17 17:49:14.397750 containerd[1502]: time="2025-03-17T17:49:14.397721938Z" level=info msg="StopPodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\"" Mar 17 17:49:14.397889 systemd[1]: run-netns-cni\x2d41955ef9\x2da2ce\x2d5abf\x2d360a\x2da20a29e0b014.mount: Deactivated successfully. 
Mar 17 17:49:14.400476 containerd[1502]: time="2025-03-17T17:49:14.398897596Z" level=info msg="TearDown network for sandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" successfully" Mar 17 17:49:14.400476 containerd[1502]: time="2025-03-17T17:49:14.398921396Z" level=info msg="StopPodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" returns successfully" Mar 17 17:49:14.401384 containerd[1502]: time="2025-03-17T17:49:14.401267953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht6qb,Uid:37c19387-6a1a-435e-b624-cd3e3f772523,Namespace:calico-system,Attempt:2,}" Mar 17 17:49:14.402441 kubelet[2846]: I0317 17:49:14.402297 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107" Mar 17 17:49:14.405633 containerd[1502]: time="2025-03-17T17:49:14.403723871Z" level=info msg="StopPodSandbox for \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\"" Mar 17 17:49:14.405633 containerd[1502]: time="2025-03-17T17:49:14.403864393Z" level=info msg="Ensure that sandbox 1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107 in task-service has been cleanup successfully" Mar 17 17:49:14.405633 containerd[1502]: time="2025-03-17T17:49:14.404436522Z" level=info msg="TearDown network for sandbox \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\" successfully" Mar 17 17:49:14.405633 containerd[1502]: time="2025-03-17T17:49:14.404455922Z" level=info msg="StopPodSandbox for \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\" returns successfully" Mar 17 17:49:14.406138 kubelet[2846]: I0317 17:49:14.406001 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115" Mar 17 17:49:14.406980 containerd[1502]: time="2025-03-17T17:49:14.406945161Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-bblw2,Uid:74b237a4-f5d5-48d3-8f38-13c8c4872091,Namespace:calico-apiserver,Attempt:1,}" Mar 17 17:49:14.407647 containerd[1502]: time="2025-03-17T17:49:14.407414728Z" level=info msg="StopPodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\"" Mar 17 17:49:14.409972 containerd[1502]: time="2025-03-17T17:49:14.408997073Z" level=info msg="Ensure that sandbox 0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115 in task-service has been cleanup successfully" Mar 17 17:49:14.409972 containerd[1502]: time="2025-03-17T17:49:14.409202236Z" level=info msg="TearDown network for sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" successfully" Mar 17 17:49:14.409972 containerd[1502]: time="2025-03-17T17:49:14.409216116Z" level=info msg="StopPodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" returns successfully" Mar 17 17:49:14.414843 containerd[1502]: time="2025-03-17T17:49:14.414569799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb9b7ff55-f6s4k,Uid:447aab82-3c54-4fc9-a563-99b96e52f28a,Namespace:calico-system,Attempt:1,}" Mar 17 17:49:14.512565 containerd[1502]: time="2025-03-17T17:49:14.512499639Z" level=error msg="Failed to destroy network for sandbox \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.513925 containerd[1502]: time="2025-03-17T17:49:14.512876885Z" level=error msg="encountered an error cleaning up failed sandbox \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.513925 containerd[1502]: time="2025-03-17T17:49:14.512931846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n8th,Uid:89e31e82-fcf0-4b12-9877-940dcbb04dfb,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.514993 kubelet[2846]: E0317 17:49:14.514609 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.515428 kubelet[2846]: E0317 17:49:14.515305 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9n8th" Mar 17 17:49:14.515641 kubelet[2846]: E0317 17:49:14.515580 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9n8th" Mar 17 17:49:14.517218 kubelet[2846]: E0317 17:49:14.515863 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9n8th_kube-system(89e31e82-fcf0-4b12-9877-940dcbb04dfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9n8th_kube-system(89e31e82-fcf0-4b12-9877-940dcbb04dfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9n8th" podUID="89e31e82-fcf0-4b12-9877-940dcbb04dfb" Mar 17 17:49:14.576073 containerd[1502]: time="2025-03-17T17:49:14.576004105Z" level=error msg="Failed to destroy network for sandbox \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.577147 containerd[1502]: time="2025-03-17T17:49:14.576963960Z" level=error msg="encountered an error cleaning up failed sandbox \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.577147 containerd[1502]: time="2025-03-17T17:49:14.577043881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-nd4n6,Uid:a74fbbc9-5937-415b-8d68-ed4ea0db44e4,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for 
sandbox \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.577341 kubelet[2846]: E0317 17:49:14.577273 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.577341 kubelet[2846]: E0317 17:49:14.577333 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" Mar 17 17:49:14.577464 kubelet[2846]: E0317 17:49:14.577364 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" Mar 17 17:49:14.577464 kubelet[2846]: E0317 17:49:14.577419 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5545ddd8-nd4n6_calico-apiserver(a74fbbc9-5937-415b-8d68-ed4ea0db44e4)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-apiserver-c5545ddd8-nd4n6_calico-apiserver(a74fbbc9-5937-415b-8d68-ed4ea0db44e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" podUID="a74fbbc9-5937-415b-8d68-ed4ea0db44e4" Mar 17 17:49:14.593707 containerd[1502]: time="2025-03-17T17:49:14.593575978Z" level=error msg="Failed to destroy network for sandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.594495 containerd[1502]: time="2025-03-17T17:49:14.594460432Z" level=error msg="encountered an error cleaning up failed sandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.594662 containerd[1502]: time="2025-03-17T17:49:14.594567673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb9b7ff55-f6s4k,Uid:447aab82-3c54-4fc9-a563-99b96e52f28a,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.595610 kubelet[2846]: E0317 
17:49:14.595343 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.595793 kubelet[2846]: E0317 17:49:14.595587 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" Mar 17 17:49:14.595793 kubelet[2846]: E0317 17:49:14.595720 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" Mar 17 17:49:14.596099 kubelet[2846]: E0317 17:49:14.595972 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fb9b7ff55-f6s4k_calico-system(447aab82-3c54-4fc9-a563-99b96e52f28a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fb9b7ff55-f6s4k_calico-system(447aab82-3c54-4fc9-a563-99b96e52f28a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" podUID="447aab82-3c54-4fc9-a563-99b96e52f28a" Mar 17 17:49:14.636071 containerd[1502]: time="2025-03-17T17:49:14.635993396Z" level=error msg="Failed to destroy network for sandbox \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.636785 containerd[1502]: time="2025-03-17T17:49:14.636617446Z" level=error msg="encountered an error cleaning up failed sandbox \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.636785 containerd[1502]: time="2025-03-17T17:49:14.636681527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht6qb,Uid:37c19387-6a1a-435e-b624-cd3e3f772523,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.636931 kubelet[2846]: E0317 17:49:14.636902 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.637002 kubelet[2846]: E0317 17:49:14.636954 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:14.637002 kubelet[2846]: E0317 17:49:14.636974 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:14.637144 kubelet[2846]: E0317 17:49:14.637024 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ht6qb_calico-system(37c19387-6a1a-435e-b624-cd3e3f772523)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ht6qb_calico-system(37c19387-6a1a-435e-b624-cd3e3f772523)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ht6qb" podUID="37c19387-6a1a-435e-b624-cd3e3f772523" Mar 17 17:49:14.644729 containerd[1502]: time="2025-03-17T17:49:14.644678811Z" 
level=error msg="Failed to destroy network for sandbox \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.645076 containerd[1502]: time="2025-03-17T17:49:14.645019856Z" level=error msg="encountered an error cleaning up failed sandbox \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.645172 containerd[1502]: time="2025-03-17T17:49:14.645100658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5t9xs,Uid:18fbf695-ee31-4ad3-8e56-31fea597eadd,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.645368 kubelet[2846]: E0317 17:49:14.645304 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.645420 kubelet[2846]: E0317 17:49:14.645394 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5t9xs" Mar 17 17:49:14.645420 kubelet[2846]: E0317 17:49:14.645413 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5t9xs" Mar 17 17:49:14.645524 kubelet[2846]: E0317 17:49:14.645460 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5t9xs_kube-system(18fbf695-ee31-4ad3-8e56-31fea597eadd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5t9xs_kube-system(18fbf695-ee31-4ad3-8e56-31fea597eadd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5t9xs" podUID="18fbf695-ee31-4ad3-8e56-31fea597eadd" Mar 17 17:49:14.646306 containerd[1502]: time="2025-03-17T17:49:14.646084473Z" level=error msg="Failed to destroy network for sandbox \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.647073 
containerd[1502]: time="2025-03-17T17:49:14.646766004Z" level=error msg="encountered an error cleaning up failed sandbox \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.647293 containerd[1502]: time="2025-03-17T17:49:14.647188050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-bblw2,Uid:74b237a4-f5d5-48d3-8f38-13c8c4872091,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.647837 kubelet[2846]: E0317 17:49:14.647799 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:14.648064 kubelet[2846]: E0317 17:49:14.647869 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" Mar 17 17:49:14.648064 kubelet[2846]: E0317 17:49:14.647894 2846 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" Mar 17 17:49:14.648064 kubelet[2846]: E0317 17:49:14.647942 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5545ddd8-bblw2_calico-apiserver(74b237a4-f5d5-48d3-8f38-13c8c4872091)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5545ddd8-bblw2_calico-apiserver(74b237a4-f5d5-48d3-8f38-13c8c4872091)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" podUID="74b237a4-f5d5-48d3-8f38-13c8c4872091" Mar 17 17:49:15.199346 systemd[1]: run-netns-cni\x2d5e04617a\x2dc571\x2d9778\x2dc5e4\x2d3525200ed1cc.mount: Deactivated successfully. Mar 17 17:49:15.200369 systemd[1]: run-netns-cni\x2df24016ab\x2dad3a\x2de66b\x2db877\x2d3133bd5dcfec.mount: Deactivated successfully. 
Mar 17 17:49:15.403134 kubelet[2846]: I0317 17:49:15.402859 2846 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:49:15.413643 kubelet[2846]: I0317 17:49:15.412713 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1" Mar 17 17:49:15.414431 containerd[1502]: time="2025-03-17T17:49:15.413926597Z" level=info msg="StopPodSandbox for \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\"" Mar 17 17:49:15.414431 containerd[1502]: time="2025-03-17T17:49:15.414134361Z" level=info msg="Ensure that sandbox d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1 in task-service has been cleanup successfully" Mar 17 17:49:15.417986 systemd[1]: run-netns-cni\x2d36391910\x2d25db\x2deee0\x2dae3c\x2d6afa61de7ac8.mount: Deactivated successfully. Mar 17 17:49:15.419016 containerd[1502]: time="2025-03-17T17:49:15.418689593Z" level=info msg="TearDown network for sandbox \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\" successfully" Mar 17 17:49:15.419016 containerd[1502]: time="2025-03-17T17:49:15.418842676Z" level=info msg="StopPodSandbox for \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\" returns successfully" Mar 17 17:49:15.419089 kubelet[2846]: I0317 17:49:15.418302 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e" Mar 17 17:49:15.422525 containerd[1502]: time="2025-03-17T17:49:15.420547503Z" level=info msg="StopPodSandbox for \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\"" Mar 17 17:49:15.422525 containerd[1502]: time="2025-03-17T17:49:15.420734906Z" level=info msg="Ensure that sandbox cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e in task-service has been cleanup successfully" Mar 17 17:49:15.422904 containerd[1502]: 
time="2025-03-17T17:49:15.422872100Z" level=info msg="StopPodSandbox for \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\"" Mar 17 17:49:15.423185 containerd[1502]: time="2025-03-17T17:49:15.423163705Z" level=info msg="TearDown network for sandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\" successfully" Mar 17 17:49:15.423270 containerd[1502]: time="2025-03-17T17:49:15.423250666Z" level=info msg="StopPodSandbox for \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\" returns successfully" Mar 17 17:49:15.424090 systemd[1]: run-netns-cni\x2dbdb2f07d\x2df127\x2d6c83\x2d64df\x2da7025d857851.mount: Deactivated successfully. Mar 17 17:49:15.425154 containerd[1502]: time="2025-03-17T17:49:15.424763810Z" level=info msg="TearDown network for sandbox \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\" successfully" Mar 17 17:49:15.425154 containerd[1502]: time="2025-03-17T17:49:15.424788650Z" level=info msg="StopPodSandbox for \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\" returns successfully" Mar 17 17:49:15.427419 containerd[1502]: time="2025-03-17T17:49:15.425910308Z" level=info msg="StopPodSandbox for \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\"" Mar 17 17:49:15.427419 containerd[1502]: time="2025-03-17T17:49:15.426011110Z" level=info msg="TearDown network for sandbox \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\" successfully" Mar 17 17:49:15.427419 containerd[1502]: time="2025-03-17T17:49:15.426020910Z" level=info msg="StopPodSandbox for \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\" returns successfully" Mar 17 17:49:15.427419 containerd[1502]: time="2025-03-17T17:49:15.426148952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n8th,Uid:89e31e82-fcf0-4b12-9877-940dcbb04dfb,Namespace:kube-system,Attempt:2,}" Mar 17 17:49:15.428179 containerd[1502]: 
time="2025-03-17T17:49:15.428132344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5t9xs,Uid:18fbf695-ee31-4ad3-8e56-31fea597eadd,Namespace:kube-system,Attempt:2,}" Mar 17 17:49:15.429619 kubelet[2846]: I0317 17:49:15.429587 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061" Mar 17 17:49:15.430687 containerd[1502]: time="2025-03-17T17:49:15.430652304Z" level=info msg="StopPodSandbox for \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\"" Mar 17 17:49:15.431174 containerd[1502]: time="2025-03-17T17:49:15.431132991Z" level=info msg="Ensure that sandbox c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061 in task-service has been cleanup successfully" Mar 17 17:49:15.436828 systemd[1]: run-netns-cni\x2db78b0a79\x2d0944\x2d5e35\x2d7f70\x2d44a76d0f016c.mount: Deactivated successfully. Mar 17 17:49:15.441802 containerd[1502]: time="2025-03-17T17:49:15.441730880Z" level=info msg="TearDown network for sandbox \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\" successfully" Mar 17 17:49:15.441802 containerd[1502]: time="2025-03-17T17:49:15.441769761Z" level=info msg="StopPodSandbox for \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\" returns successfully" Mar 17 17:49:15.446980 containerd[1502]: time="2025-03-17T17:49:15.446949123Z" level=info msg="StopPodSandbox for \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\"" Mar 17 17:49:15.448569 containerd[1502]: time="2025-03-17T17:49:15.447665295Z" level=info msg="TearDown network for sandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\" successfully" Mar 17 17:49:15.448569 containerd[1502]: time="2025-03-17T17:49:15.447685335Z" level=info msg="StopPodSandbox for \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\" returns successfully" Mar 17 17:49:15.449363 
containerd[1502]: time="2025-03-17T17:49:15.449216879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-nd4n6,Uid:a74fbbc9-5937-415b-8d68-ed4ea0db44e4,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:49:15.455536 kubelet[2846]: I0317 17:49:15.454236 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7" Mar 17 17:49:15.456177 containerd[1502]: time="2025-03-17T17:49:15.455775384Z" level=info msg="StopPodSandbox for \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\"" Mar 17 17:49:15.456177 containerd[1502]: time="2025-03-17T17:49:15.455970067Z" level=info msg="Ensure that sandbox b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7 in task-service has been cleanup successfully" Mar 17 17:49:15.463626 systemd[1]: run-netns-cni\x2d65e4e763\x2d1c27\x2df959\x2d35e0\x2d192e99f3a4df.mount: Deactivated successfully. Mar 17 17:49:15.469265 containerd[1502]: time="2025-03-17T17:49:15.468992714Z" level=info msg="TearDown network for sandbox \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\" successfully" Mar 17 17:49:15.470190 containerd[1502]: time="2025-03-17T17:49:15.470147933Z" level=info msg="StopPodSandbox for \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\" returns successfully" Mar 17 17:49:15.473633 kubelet[2846]: I0317 17:49:15.473602 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077" Mar 17 17:49:15.482413 containerd[1502]: time="2025-03-17T17:49:15.481827039Z" level=info msg="StopPodSandbox for \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\"" Mar 17 17:49:15.483916 containerd[1502]: time="2025-03-17T17:49:15.482808374Z" level=info msg="Ensure that sandbox be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077 in task-service has 
been cleanup successfully" Mar 17 17:49:15.484075 containerd[1502]: time="2025-03-17T17:49:15.484017234Z" level=info msg="TearDown network for sandbox \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\" successfully" Mar 17 17:49:15.484184 containerd[1502]: time="2025-03-17T17:49:15.484158636Z" level=info msg="StopPodSandbox for \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\" returns successfully" Mar 17 17:49:15.485382 containerd[1502]: time="2025-03-17T17:49:15.484718445Z" level=info msg="StopPodSandbox for \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\"" Mar 17 17:49:15.486253 containerd[1502]: time="2025-03-17T17:49:15.485740381Z" level=info msg="TearDown network for sandbox \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\" successfully" Mar 17 17:49:15.486463 containerd[1502]: time="2025-03-17T17:49:15.486430752Z" level=info msg="StopPodSandbox for \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\" returns successfully" Mar 17 17:49:15.488263 containerd[1502]: time="2025-03-17T17:49:15.488218020Z" level=info msg="StopPodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\"" Mar 17 17:49:15.488511 containerd[1502]: time="2025-03-17T17:49:15.488494185Z" level=info msg="TearDown network for sandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" successfully" Mar 17 17:49:15.488608 containerd[1502]: time="2025-03-17T17:49:15.488572186Z" level=info msg="StopPodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" returns successfully" Mar 17 17:49:15.488785 containerd[1502]: time="2025-03-17T17:49:15.488751789Z" level=info msg="StopPodSandbox for \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\"" Mar 17 17:49:15.489496 containerd[1502]: time="2025-03-17T17:49:15.488925152Z" level=info msg="TearDown network for sandbox 
\"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\" successfully" Mar 17 17:49:15.489496 containerd[1502]: time="2025-03-17T17:49:15.489468800Z" level=info msg="StopPodSandbox for \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\" returns successfully" Mar 17 17:49:15.495666 containerd[1502]: time="2025-03-17T17:49:15.495567777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht6qb,Uid:37c19387-6a1a-435e-b624-cd3e3f772523,Namespace:calico-system,Attempt:3,}" Mar 17 17:49:15.496539 containerd[1502]: time="2025-03-17T17:49:15.496145227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-bblw2,Uid:74b237a4-f5d5-48d3-8f38-13c8c4872091,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:49:15.499526 kubelet[2846]: I0317 17:49:15.499500 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c" Mar 17 17:49:15.506024 containerd[1502]: time="2025-03-17T17:49:15.505252692Z" level=info msg="StopPodSandbox for \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\"" Mar 17 17:49:15.506024 containerd[1502]: time="2025-03-17T17:49:15.505468455Z" level=info msg="Ensure that sandbox aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c in task-service has been cleanup successfully" Mar 17 17:49:15.506024 containerd[1502]: time="2025-03-17T17:49:15.505637738Z" level=info msg="TearDown network for sandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\" successfully" Mar 17 17:49:15.506024 containerd[1502]: time="2025-03-17T17:49:15.505652738Z" level=info msg="StopPodSandbox for \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\" returns successfully" Mar 17 17:49:15.513617 containerd[1502]: time="2025-03-17T17:49:15.513581144Z" level=info msg="StopPodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\"" 
Mar 17 17:49:15.515008 containerd[1502]: time="2025-03-17T17:49:15.514979767Z" level=info msg="TearDown network for sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" successfully" Mar 17 17:49:15.515846 containerd[1502]: time="2025-03-17T17:49:15.515823260Z" level=info msg="StopPodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" returns successfully" Mar 17 17:49:15.516905 containerd[1502]: time="2025-03-17T17:49:15.516877837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb9b7ff55-f6s4k,Uid:447aab82-3c54-4fc9-a563-99b96e52f28a,Namespace:calico-system,Attempt:2,}" Mar 17 17:49:15.716316 containerd[1502]: time="2025-03-17T17:49:15.715587001Z" level=error msg="Failed to destroy network for sandbox \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.717095 containerd[1502]: time="2025-03-17T17:49:15.717062744Z" level=error msg="encountered an error cleaning up failed sandbox \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.717233 containerd[1502]: time="2025-03-17T17:49:15.717213507Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5t9xs,Uid:18fbf695-ee31-4ad3-8e56-31fea597eadd,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.717571 kubelet[2846]: E0317 17:49:15.717537 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.717820 kubelet[2846]: E0317 17:49:15.717777 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5t9xs" Mar 17 17:49:15.717950 kubelet[2846]: E0317 17:49:15.717806 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5t9xs" Mar 17 17:49:15.718055 kubelet[2846]: E0317 17:49:15.718017 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5t9xs_kube-system(18fbf695-ee31-4ad3-8e56-31fea597eadd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5t9xs_kube-system(18fbf695-ee31-4ad3-8e56-31fea597eadd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5t9xs" podUID="18fbf695-ee31-4ad3-8e56-31fea597eadd" Mar 17 17:49:15.752380 containerd[1502]: time="2025-03-17T17:49:15.752084022Z" level=error msg="Failed to destroy network for sandbox \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.753317 containerd[1502]: time="2025-03-17T17:49:15.753180800Z" level=error msg="encountered an error cleaning up failed sandbox \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.753504 containerd[1502]: time="2025-03-17T17:49:15.753481844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n8th,Uid:89e31e82-fcf0-4b12-9877-940dcbb04dfb,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.754193 kubelet[2846]: E0317 17:49:15.754139 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.754287 kubelet[2846]: E0317 17:49:15.754199 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9n8th" Mar 17 17:49:15.754287 kubelet[2846]: E0317 17:49:15.754219 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9n8th" Mar 17 17:49:15.754489 kubelet[2846]: E0317 17:49:15.754278 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9n8th_kube-system(89e31e82-fcf0-4b12-9877-940dcbb04dfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9n8th_kube-system(89e31e82-fcf0-4b12-9877-940dcbb04dfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9n8th" podUID="89e31e82-fcf0-4b12-9877-940dcbb04dfb" Mar 17 17:49:15.770642 containerd[1502]: 
time="2025-03-17T17:49:15.770509156Z" level=error msg="Failed to destroy network for sandbox \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.771095 containerd[1502]: time="2025-03-17T17:49:15.770963923Z" level=error msg="encountered an error cleaning up failed sandbox \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.771095 containerd[1502]: time="2025-03-17T17:49:15.771024284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-bblw2,Uid:74b237a4-f5d5-48d3-8f38-13c8c4872091,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.771649 kubelet[2846]: E0317 17:49:15.771602 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.771738 kubelet[2846]: E0317 17:49:15.771665 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" Mar 17 17:49:15.771738 kubelet[2846]: E0317 17:49:15.771686 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" Mar 17 17:49:15.771738 kubelet[2846]: E0317 17:49:15.771723 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5545ddd8-bblw2_calico-apiserver(74b237a4-f5d5-48d3-8f38-13c8c4872091)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5545ddd8-bblw2_calico-apiserver(74b237a4-f5d5-48d3-8f38-13c8c4872091)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" podUID="74b237a4-f5d5-48d3-8f38-13c8c4872091" Mar 17 17:49:15.787806 containerd[1502]: time="2025-03-17T17:49:15.787539587Z" level=error msg="Failed to destroy network for sandbox \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Mar 17 17:49:15.788188 containerd[1502]: time="2025-03-17T17:49:15.788156037Z" level=error msg="encountered an error cleaning up failed sandbox \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.788300 containerd[1502]: time="2025-03-17T17:49:15.788282399Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht6qb,Uid:37c19387-6a1a-435e-b624-cd3e3f772523,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.788643 kubelet[2846]: E0317 17:49:15.788605 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.788730 kubelet[2846]: E0317 17:49:15.788661 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:15.788730 
kubelet[2846]: E0317 17:49:15.788679 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:15.788730 kubelet[2846]: E0317 17:49:15.788716 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ht6qb_calico-system(37c19387-6a1a-435e-b624-cd3e3f772523)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ht6qb_calico-system(37c19387-6a1a-435e-b624-cd3e3f772523)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ht6qb" podUID="37c19387-6a1a-435e-b624-cd3e3f772523" Mar 17 17:49:15.797823 containerd[1502]: time="2025-03-17T17:49:15.797774310Z" level=error msg="Failed to destroy network for sandbox \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.798191 containerd[1502]: time="2025-03-17T17:49:15.798154596Z" level=error msg="encountered an error cleaning up failed sandbox \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.798247 containerd[1502]: time="2025-03-17T17:49:15.798228117Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb9b7ff55-f6s4k,Uid:447aab82-3c54-4fc9-a563-99b96e52f28a,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.798536 kubelet[2846]: E0317 17:49:15.798504 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.799467 kubelet[2846]: E0317 17:49:15.799159 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" Mar 17 17:49:15.799467 kubelet[2846]: E0317 17:49:15.799184 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" Mar 17 17:49:15.799467 kubelet[2846]: E0317 17:49:15.799238 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fb9b7ff55-f6s4k_calico-system(447aab82-3c54-4fc9-a563-99b96e52f28a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fb9b7ff55-f6s4k_calico-system(447aab82-3c54-4fc9-a563-99b96e52f28a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" podUID="447aab82-3c54-4fc9-a563-99b96e52f28a" Mar 17 17:49:15.802582 containerd[1502]: time="2025-03-17T17:49:15.802429224Z" level=error msg="Failed to destroy network for sandbox \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.803643 containerd[1502]: time="2025-03-17T17:49:15.803433320Z" level=error msg="encountered an error cleaning up failed sandbox \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.803877 containerd[1502]: time="2025-03-17T17:49:15.803696124Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-nd4n6,Uid:a74fbbc9-5937-415b-8d68-ed4ea0db44e4,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.804999 kubelet[2846]: E0317 17:49:15.804942 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:15.805472 kubelet[2846]: E0317 17:49:15.805121 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" Mar 17 17:49:15.805472 kubelet[2846]: E0317 17:49:15.805148 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" Mar 17 17:49:15.805928 kubelet[2846]: E0317 17:49:15.805714 2846 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5545ddd8-nd4n6_calico-apiserver(a74fbbc9-5937-415b-8d68-ed4ea0db44e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5545ddd8-nd4n6_calico-apiserver(a74fbbc9-5937-415b-8d68-ed4ea0db44e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" podUID="a74fbbc9-5937-415b-8d68-ed4ea0db44e4" Mar 17 17:49:16.197037 systemd[1]: run-netns-cni\x2dbb540a3a\x2d4824\x2d98da\x2ddb1c\x2d89157deafcdd.mount: Deactivated successfully. Mar 17 17:49:16.198713 systemd[1]: run-netns-cni\x2dd190f017\x2df5cb\x2d7e1f\x2d45b5\x2dae62dee013bf.mount: Deactivated successfully. 
Mar 17 17:49:16.506001 kubelet[2846]: I0317 17:49:16.505958 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727" Mar 17 17:49:16.510215 containerd[1502]: time="2025-03-17T17:49:16.507407286Z" level=info msg="StopPodSandbox for \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\"" Mar 17 17:49:16.510215 containerd[1502]: time="2025-03-17T17:49:16.507770852Z" level=info msg="Ensure that sandbox deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727 in task-service has been cleanup successfully" Mar 17 17:49:16.510215 containerd[1502]: time="2025-03-17T17:49:16.507972855Z" level=info msg="TearDown network for sandbox \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\" successfully" Mar 17 17:49:16.510215 containerd[1502]: time="2025-03-17T17:49:16.507997296Z" level=info msg="StopPodSandbox for \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\" returns successfully" Mar 17 17:49:16.515730 containerd[1502]: time="2025-03-17T17:49:16.514151356Z" level=info msg="StopPodSandbox for \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\"" Mar 17 17:49:16.515730 containerd[1502]: time="2025-03-17T17:49:16.514268038Z" level=info msg="TearDown network for sandbox \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\" successfully" Mar 17 17:49:16.515730 containerd[1502]: time="2025-03-17T17:49:16.514278478Z" level=info msg="StopPodSandbox for \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\" returns successfully" Mar 17 17:49:16.515258 systemd[1]: run-netns-cni\x2decce31c5\x2d2e87\x2d6713\x2d6d4f\x2d89eb692eb196.mount: Deactivated successfully. 
Mar 17 17:49:16.516107 containerd[1502]: time="2025-03-17T17:49:16.516004946Z" level=info msg="StopPodSandbox for \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\"" Mar 17 17:49:16.516136 containerd[1502]: time="2025-03-17T17:49:16.516108828Z" level=info msg="TearDown network for sandbox \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\" successfully" Mar 17 17:49:16.516136 containerd[1502]: time="2025-03-17T17:49:16.516119588Z" level=info msg="StopPodSandbox for \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\" returns successfully" Mar 17 17:49:16.517443 containerd[1502]: time="2025-03-17T17:49:16.517219766Z" level=info msg="StopPodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\"" Mar 17 17:49:16.517443 containerd[1502]: time="2025-03-17T17:49:16.517309488Z" level=info msg="TearDown network for sandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" successfully" Mar 17 17:49:16.517443 containerd[1502]: time="2025-03-17T17:49:16.517320008Z" level=info msg="StopPodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" returns successfully" Mar 17 17:49:16.518027 containerd[1502]: time="2025-03-17T17:49:16.518001419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht6qb,Uid:37c19387-6a1a-435e-b624-cd3e3f772523,Namespace:calico-system,Attempt:4,}" Mar 17 17:49:16.518754 kubelet[2846]: I0317 17:49:16.518529 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c" Mar 17 17:49:16.520933 containerd[1502]: time="2025-03-17T17:49:16.519944891Z" level=info msg="StopPodSandbox for \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\"" Mar 17 17:49:16.520933 containerd[1502]: time="2025-03-17T17:49:16.520123173Z" level=info msg="Ensure that sandbox 
6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c in task-service has been cleanup successfully" Mar 17 17:49:16.521591 containerd[1502]: time="2025-03-17T17:49:16.521246752Z" level=info msg="TearDown network for sandbox \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\" successfully" Mar 17 17:49:16.526194 containerd[1502]: time="2025-03-17T17:49:16.522031325Z" level=info msg="StopPodSandbox for \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\" returns successfully" Mar 17 17:49:16.526194 containerd[1502]: time="2025-03-17T17:49:16.523547549Z" level=info msg="StopPodSandbox for \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\"" Mar 17 17:49:16.526194 containerd[1502]: time="2025-03-17T17:49:16.523637831Z" level=info msg="TearDown network for sandbox \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\" successfully" Mar 17 17:49:16.526194 containerd[1502]: time="2025-03-17T17:49:16.523648471Z" level=info msg="StopPodSandbox for \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\" returns successfully" Mar 17 17:49:16.524258 systemd[1]: run-netns-cni\x2d2c477e91\x2daaa8\x2d665d\x2db599\x2dc41bb5e964c0.mount: Deactivated successfully. 
Mar 17 17:49:16.528060 kubelet[2846]: I0317 17:49:16.526869 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42" Mar 17 17:49:16.530115 containerd[1502]: time="2025-03-17T17:49:16.530030415Z" level=info msg="StopPodSandbox for \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\"" Mar 17 17:49:16.530205 containerd[1502]: time="2025-03-17T17:49:16.530179417Z" level=info msg="TearDown network for sandbox \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\" successfully" Mar 17 17:49:16.530205 containerd[1502]: time="2025-03-17T17:49:16.530191578Z" level=info msg="StopPodSandbox for \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\" returns successfully" Mar 17 17:49:16.533170 containerd[1502]: time="2025-03-17T17:49:16.530553744Z" level=info msg="StopPodSandbox for \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\"" Mar 17 17:49:16.533170 containerd[1502]: time="2025-03-17T17:49:16.530724786Z" level=info msg="Ensure that sandbox e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42 in task-service has been cleanup successfully" Mar 17 17:49:16.533170 containerd[1502]: time="2025-03-17T17:49:16.531104473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-bblw2,Uid:74b237a4-f5d5-48d3-8f38-13c8c4872091,Namespace:calico-apiserver,Attempt:3,}" Mar 17 17:49:16.532954 systemd[1]: run-netns-cni\x2d6d8393c9\x2d2986\x2dc672\x2df456\x2daddef1f7a6ed.mount: Deactivated successfully. 
Mar 17 17:49:16.536264 containerd[1502]: time="2025-03-17T17:49:16.535994792Z" level=info msg="TearDown network for sandbox \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\" successfully" Mar 17 17:49:16.537087 containerd[1502]: time="2025-03-17T17:49:16.537022969Z" level=info msg="StopPodSandbox for \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\" returns successfully" Mar 17 17:49:16.539816 containerd[1502]: time="2025-03-17T17:49:16.539605531Z" level=info msg="StopPodSandbox for \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\"" Mar 17 17:49:16.539816 containerd[1502]: time="2025-03-17T17:49:16.539709733Z" level=info msg="TearDown network for sandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\" successfully" Mar 17 17:49:16.539816 containerd[1502]: time="2025-03-17T17:49:16.539721173Z" level=info msg="StopPodSandbox for \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\" returns successfully" Mar 17 17:49:16.540991 containerd[1502]: time="2025-03-17T17:49:16.540869312Z" level=info msg="StopPodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\"" Mar 17 17:49:16.541182 containerd[1502]: time="2025-03-17T17:49:16.541123996Z" level=info msg="TearDown network for sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" successfully" Mar 17 17:49:16.541182 containerd[1502]: time="2025-03-17T17:49:16.541143276Z" level=info msg="StopPodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" returns successfully" Mar 17 17:49:16.541916 kubelet[2846]: I0317 17:49:16.541806 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd" Mar 17 17:49:16.542974 containerd[1502]: time="2025-03-17T17:49:16.542391337Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6fb9b7ff55-f6s4k,Uid:447aab82-3c54-4fc9-a563-99b96e52f28a,Namespace:calico-system,Attempt:3,}" Mar 17 17:49:16.544972 containerd[1502]: time="2025-03-17T17:49:16.544931458Z" level=info msg="StopPodSandbox for \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\"" Mar 17 17:49:16.545895 containerd[1502]: time="2025-03-17T17:49:16.545849873Z" level=info msg="Ensure that sandbox aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd in task-service has been cleanup successfully" Mar 17 17:49:16.548570 containerd[1502]: time="2025-03-17T17:49:16.548457996Z" level=info msg="TearDown network for sandbox \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\" successfully" Mar 17 17:49:16.548570 containerd[1502]: time="2025-03-17T17:49:16.548518717Z" level=info msg="StopPodSandbox for \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\" returns successfully" Mar 17 17:49:16.549453 containerd[1502]: time="2025-03-17T17:49:16.549337250Z" level=info msg="StopPodSandbox for \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\"" Mar 17 17:49:16.549582 containerd[1502]: time="2025-03-17T17:49:16.549563054Z" level=info msg="TearDown network for sandbox \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\" successfully" Mar 17 17:49:16.549716 containerd[1502]: time="2025-03-17T17:49:16.549628175Z" level=info msg="StopPodSandbox for \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\" returns successfully" Mar 17 17:49:16.550619 kubelet[2846]: I0317 17:49:16.550472 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a" Mar 17 17:49:16.553201 containerd[1502]: time="2025-03-17T17:49:16.553164352Z" level=info msg="StopPodSandbox for \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\"" Mar 17 17:49:16.554003 systemd[1]: 
run-netns-cni\x2d112985ad\x2d2a85\x2dd5fb\x2d7ed2\x2d008751b24c38.mount: Deactivated successfully. Mar 17 17:49:16.557106 containerd[1502]: time="2025-03-17T17:49:16.556964774Z" level=info msg="Ensure that sandbox 5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a in task-service has been cleanup successfully" Mar 17 17:49:16.557776 containerd[1502]: time="2025-03-17T17:49:16.553892924Z" level=info msg="StopPodSandbox for \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\"" Mar 17 17:49:16.557981 containerd[1502]: time="2025-03-17T17:49:16.557956791Z" level=info msg="TearDown network for sandbox \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\" successfully" Mar 17 17:49:16.557981 containerd[1502]: time="2025-03-17T17:49:16.557977031Z" level=info msg="StopPodSandbox for \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\" returns successfully" Mar 17 17:49:16.558398 containerd[1502]: time="2025-03-17T17:49:16.558331917Z" level=info msg="TearDown network for sandbox \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\" successfully" Mar 17 17:49:16.558398 containerd[1502]: time="2025-03-17T17:49:16.558375277Z" level=info msg="StopPodSandbox for \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\" returns successfully" Mar 17 17:49:16.559694 containerd[1502]: time="2025-03-17T17:49:16.559666258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5t9xs,Uid:18fbf695-ee31-4ad3-8e56-31fea597eadd,Namespace:kube-system,Attempt:3,}" Mar 17 17:49:16.560062 containerd[1502]: time="2025-03-17T17:49:16.559970823Z" level=info msg="StopPodSandbox for \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\"" Mar 17 17:49:16.560112 containerd[1502]: time="2025-03-17T17:49:16.560062225Z" level=info msg="TearDown network for sandbox \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\" successfully" Mar 17 17:49:16.560112 
containerd[1502]: time="2025-03-17T17:49:16.560073345Z" level=info msg="StopPodSandbox for \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\" returns successfully" Mar 17 17:49:16.560888 containerd[1502]: time="2025-03-17T17:49:16.560716596Z" level=info msg="StopPodSandbox for \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\"" Mar 17 17:49:16.560888 containerd[1502]: time="2025-03-17T17:49:16.560857078Z" level=info msg="TearDown network for sandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\" successfully" Mar 17 17:49:16.560888 containerd[1502]: time="2025-03-17T17:49:16.560868718Z" level=info msg="StopPodSandbox for \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\" returns successfully" Mar 17 17:49:16.563836 kubelet[2846]: I0317 17:49:16.561865 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca" Mar 17 17:49:16.563909 containerd[1502]: time="2025-03-17T17:49:16.562778869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-nd4n6,Uid:a74fbbc9-5937-415b-8d68-ed4ea0db44e4,Namespace:calico-apiserver,Attempt:3,}" Mar 17 17:49:16.563909 containerd[1502]: time="2025-03-17T17:49:16.563174116Z" level=info msg="StopPodSandbox for \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\"" Mar 17 17:49:16.563909 containerd[1502]: time="2025-03-17T17:49:16.563390919Z" level=info msg="Ensure that sandbox f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca in task-service has been cleanup successfully" Mar 17 17:49:16.563909 containerd[1502]: time="2025-03-17T17:49:16.563571402Z" level=info msg="TearDown network for sandbox \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\" successfully" Mar 17 17:49:16.563909 containerd[1502]: time="2025-03-17T17:49:16.563778606Z" level=info msg="StopPodSandbox for 
\"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\" returns successfully" Mar 17 17:49:16.566514 containerd[1502]: time="2025-03-17T17:49:16.566480610Z" level=info msg="StopPodSandbox for \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\"" Mar 17 17:49:16.566586 containerd[1502]: time="2025-03-17T17:49:16.566567171Z" level=info msg="TearDown network for sandbox \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\" successfully" Mar 17 17:49:16.566586 containerd[1502]: time="2025-03-17T17:49:16.566580771Z" level=info msg="StopPodSandbox for \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\" returns successfully" Mar 17 17:49:16.568198 containerd[1502]: time="2025-03-17T17:49:16.568075676Z" level=info msg="StopPodSandbox for \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\"" Mar 17 17:49:16.569883 containerd[1502]: time="2025-03-17T17:49:16.569848625Z" level=info msg="TearDown network for sandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\" successfully" Mar 17 17:49:16.569883 containerd[1502]: time="2025-03-17T17:49:16.569872385Z" level=info msg="StopPodSandbox for \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\" returns successfully" Mar 17 17:49:16.570606 containerd[1502]: time="2025-03-17T17:49:16.570573276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n8th,Uid:89e31e82-fcf0-4b12-9877-940dcbb04dfb,Namespace:kube-system,Attempt:3,}" Mar 17 17:49:16.826938 containerd[1502]: time="2025-03-17T17:49:16.826461250Z" level=error msg="Failed to destroy network for sandbox \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.827218 containerd[1502]: time="2025-03-17T17:49:16.827185382Z" 
level=error msg="Failed to destroy network for sandbox \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.830079 containerd[1502]: time="2025-03-17T17:49:16.830001228Z" level=error msg="encountered an error cleaning up failed sandbox \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.830901 containerd[1502]: time="2025-03-17T17:49:16.830860082Z" level=error msg="encountered an error cleaning up failed sandbox \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.831505 containerd[1502]: time="2025-03-17T17:49:16.831469052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht6qb,Uid:37c19387-6a1a-435e-b624-cd3e3f772523,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.831704 kubelet[2846]: E0317 17:49:16.831671 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.831765 containerd[1502]: time="2025-03-17T17:49:16.831604134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5t9xs,Uid:18fbf695-ee31-4ad3-8e56-31fea597eadd,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.832155 kubelet[2846]: E0317 17:49:16.832101 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:16.832318 kubelet[2846]: E0317 17:49:16.832149 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:16.832318 kubelet[2846]: E0317 17:49:16.832225 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-ht6qb_calico-system(37c19387-6a1a-435e-b624-cd3e3f772523)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ht6qb_calico-system(37c19387-6a1a-435e-b624-cd3e3f772523)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ht6qb" podUID="37c19387-6a1a-435e-b624-cd3e3f772523" Mar 17 17:49:16.834184 kubelet[2846]: E0317 17:49:16.833996 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.834184 kubelet[2846]: E0317 17:49:16.834078 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5t9xs" Mar 17 17:49:16.834184 kubelet[2846]: E0317 17:49:16.834095 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5t9xs" Mar 17 17:49:16.834303 kubelet[2846]: E0317 17:49:16.834134 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5t9xs_kube-system(18fbf695-ee31-4ad3-8e56-31fea597eadd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5t9xs_kube-system(18fbf695-ee31-4ad3-8e56-31fea597eadd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5t9xs" podUID="18fbf695-ee31-4ad3-8e56-31fea597eadd" Mar 17 17:49:16.843179 containerd[1502]: time="2025-03-17T17:49:16.842738556Z" level=error msg="Failed to destroy network for sandbox \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.843179 containerd[1502]: time="2025-03-17T17:49:16.842919279Z" level=error msg="Failed to destroy network for sandbox \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.843619 containerd[1502]: time="2025-03-17T17:49:16.843399207Z" level=error msg="encountered an error cleaning up failed sandbox \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.843619 containerd[1502]: time="2025-03-17T17:49:16.843578690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n8th,Uid:89e31e82-fcf0-4b12-9877-940dcbb04dfb,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.843882 kubelet[2846]: E0317 17:49:16.843775 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.843882 kubelet[2846]: E0317 17:49:16.843827 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9n8th" Mar 17 17:49:16.843882 kubelet[2846]: E0317 17:49:16.843846 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9n8th" Mar 17 17:49:16.844210 kubelet[2846]: E0317 17:49:16.843884 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9n8th_kube-system(89e31e82-fcf0-4b12-9877-940dcbb04dfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9n8th_kube-system(89e31e82-fcf0-4b12-9877-940dcbb04dfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9n8th" podUID="89e31e82-fcf0-4b12-9877-940dcbb04dfb" Mar 17 17:49:16.844860 containerd[1502]: time="2025-03-17T17:49:16.844634707Z" level=error msg="encountered an error cleaning up failed sandbox \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.845988 containerd[1502]: time="2025-03-17T17:49:16.845283597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-nd4n6,Uid:a74fbbc9-5937-415b-8d68-ed4ea0db44e4,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.847487 kubelet[2846]: E0317 17:49:16.847156 2846 remote_runtime.go:193] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.847487 kubelet[2846]: E0317 17:49:16.847234 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" Mar 17 17:49:16.847487 kubelet[2846]: E0317 17:49:16.847266 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" Mar 17 17:49:16.847649 kubelet[2846]: E0317 17:49:16.847304 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5545ddd8-nd4n6_calico-apiserver(a74fbbc9-5937-415b-8d68-ed4ea0db44e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5545ddd8-nd4n6_calico-apiserver(a74fbbc9-5937-415b-8d68-ed4ea0db44e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" podUID="a74fbbc9-5937-415b-8d68-ed4ea0db44e4" Mar 17 17:49:16.857779 containerd[1502]: time="2025-03-17T17:49:16.857733680Z" level=error msg="Failed to destroy network for sandbox \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.860423 containerd[1502]: time="2025-03-17T17:49:16.860227801Z" level=error msg="encountered an error cleaning up failed sandbox \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.860423 containerd[1502]: time="2025-03-17T17:49:16.860305042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb9b7ff55-f6s4k,Uid:447aab82-3c54-4fc9-a563-99b96e52f28a,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.860934 kubelet[2846]: E0317 17:49:16.860607 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Mar 17 17:49:16.860934 kubelet[2846]: E0317 17:49:16.860674 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" Mar 17 17:49:16.860934 kubelet[2846]: E0317 17:49:16.860692 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" Mar 17 17:49:16.863075 kubelet[2846]: E0317 17:49:16.860733 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fb9b7ff55-f6s4k_calico-system(447aab82-3c54-4fc9-a563-99b96e52f28a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fb9b7ff55-f6s4k_calico-system(447aab82-3c54-4fc9-a563-99b96e52f28a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" podUID="447aab82-3c54-4fc9-a563-99b96e52f28a" Mar 17 17:49:16.863153 containerd[1502]: time="2025-03-17T17:49:16.862759562Z" 
level=error msg="Failed to destroy network for sandbox \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.863644 containerd[1502]: time="2025-03-17T17:49:16.863611536Z" level=error msg="encountered an error cleaning up failed sandbox \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.863972 containerd[1502]: time="2025-03-17T17:49:16.863946262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-bblw2,Uid:74b237a4-f5d5-48d3-8f38-13c8c4872091,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.864626 kubelet[2846]: E0317 17:49:16.864444 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:16.864626 kubelet[2846]: E0317 17:49:16.864551 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" Mar 17 17:49:16.864874 kubelet[2846]: E0317 17:49:16.864573 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" Mar 17 17:49:16.864874 kubelet[2846]: E0317 17:49:16.864831 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5545ddd8-bblw2_calico-apiserver(74b237a4-f5d5-48d3-8f38-13c8c4872091)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5545ddd8-bblw2_calico-apiserver(74b237a4-f5d5-48d3-8f38-13c8c4872091)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" podUID="74b237a4-f5d5-48d3-8f38-13c8c4872091" Mar 17 17:49:17.199345 systemd[1]: run-netns-cni\x2d0adf087a\x2d6a19\x2d8f45\x2de63b\x2d128ff1e359d4.mount: Deactivated successfully. Mar 17 17:49:17.199706 systemd[1]: run-netns-cni\x2db7f2dd2e\x2dcae8\x2dc3c3\x2dec77\x2dcfd898289937.mount: Deactivated successfully. 
Mar 17 17:49:17.568111 kubelet[2846]: I0317 17:49:17.568079 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361" Mar 17 17:49:17.570768 containerd[1502]: time="2025-03-17T17:49:17.570586402Z" level=info msg="StopPodSandbox for \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\"" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.570781566Z" level=info msg="Ensure that sandbox d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361 in task-service has been cleanup successfully" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.571262934Z" level=info msg="TearDown network for sandbox \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\" successfully" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.571286694Z" level=info msg="StopPodSandbox for \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\" returns successfully" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.571738422Z" level=info msg="StopPodSandbox for \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\"" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.571814223Z" level=info msg="TearDown network for sandbox \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\" successfully" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.571823143Z" level=info msg="StopPodSandbox for \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\" returns successfully" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.572381912Z" level=info msg="StopPodSandbox for \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\"" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.572446554Z" level=info msg="TearDown network for sandbox 
\"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\" successfully" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.572455354Z" level=info msg="StopPodSandbox for \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\" returns successfully" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.572818160Z" level=info msg="StopPodSandbox for \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\"" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.572895881Z" level=info msg="TearDown network for sandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\" successfully" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.572905161Z" level=info msg="StopPodSandbox for \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\" returns successfully" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.573759375Z" level=info msg="StopPodSandbox for \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\"" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.573901978Z" level=info msg="Ensure that sandbox 806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1 in task-service has been cleanup successfully" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.573940138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-nd4n6,Uid:a74fbbc9-5937-415b-8d68-ed4ea0db44e4,Namespace:calico-apiserver,Attempt:4,}" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.574084621Z" level=info msg="TearDown network for sandbox \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\" successfully" Mar 17 17:49:17.575616 containerd[1502]: time="2025-03-17T17:49:17.574101581Z" level=info msg="StopPodSandbox for \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\" returns successfully" Mar 17 17:49:17.575936 kubelet[2846]: 
I0317 17:49:17.573296 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1" Mar 17 17:49:17.577623 systemd[1]: run-netns-cni\x2dc1056825\x2d0861\x2dc732\x2db048\x2ddd0ce3ebcd6b.mount: Deactivated successfully. Mar 17 17:49:17.577719 systemd[1]: run-netns-cni\x2d93799901\x2d9e5b\x2df49f\x2d3b5f\x2dbbcd6e9e0c19.mount: Deactivated successfully. Mar 17 17:49:17.581820 containerd[1502]: time="2025-03-17T17:49:17.580603290Z" level=info msg="StopPodSandbox for \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\"" Mar 17 17:49:17.581820 containerd[1502]: time="2025-03-17T17:49:17.580863094Z" level=info msg="TearDown network for sandbox \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\" successfully" Mar 17 17:49:17.581820 containerd[1502]: time="2025-03-17T17:49:17.580877214Z" level=info msg="StopPodSandbox for \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\" returns successfully" Mar 17 17:49:17.584553 kubelet[2846]: I0317 17:49:17.583653 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c" Mar 17 17:49:17.584665 containerd[1502]: time="2025-03-17T17:49:17.584501515Z" level=info msg="StopPodSandbox for \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\"" Mar 17 17:49:17.586319 containerd[1502]: time="2025-03-17T17:49:17.584827080Z" level=info msg="Ensure that sandbox aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c in task-service has been cleanup successfully" Mar 17 17:49:17.586898 containerd[1502]: time="2025-03-17T17:49:17.586723232Z" level=info msg="StopPodSandbox for \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\"" Mar 17 17:49:17.587095 containerd[1502]: time="2025-03-17T17:49:17.587036877Z" level=info msg="TearDown network for sandbox 
\"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\" successfully" Mar 17 17:49:17.587169 containerd[1502]: time="2025-03-17T17:49:17.587153519Z" level=info msg="StopPodSandbox for \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\" returns successfully" Mar 17 17:49:17.587868 systemd[1]: run-netns-cni\x2dd6bba823\x2db3cb\x2db7a5\x2d019e\x2d0baba12be0f6.mount: Deactivated successfully. Mar 17 17:49:17.590929 containerd[1502]: time="2025-03-17T17:49:17.590704658Z" level=info msg="StopPodSandbox for \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\"" Mar 17 17:49:17.590929 containerd[1502]: time="2025-03-17T17:49:17.590818740Z" level=info msg="TearDown network for sandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\" successfully" Mar 17 17:49:17.590929 containerd[1502]: time="2025-03-17T17:49:17.590828860Z" level=info msg="StopPodSandbox for \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\" returns successfully" Mar 17 17:49:17.592327 containerd[1502]: time="2025-03-17T17:49:17.592291085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n8th,Uid:89e31e82-fcf0-4b12-9877-940dcbb04dfb,Namespace:kube-system,Attempt:4,}" Mar 17 17:49:17.592881 containerd[1502]: time="2025-03-17T17:49:17.592858374Z" level=info msg="TearDown network for sandbox \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\" successfully" Mar 17 17:49:17.593779 containerd[1502]: time="2025-03-17T17:49:17.593393823Z" level=info msg="StopPodSandbox for \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\" returns successfully" Mar 17 17:49:17.595403 containerd[1502]: time="2025-03-17T17:49:17.594476321Z" level=info msg="StopPodSandbox for \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\"" Mar 17 17:49:17.595403 containerd[1502]: time="2025-03-17T17:49:17.594634924Z" level=info msg="TearDown network for sandbox 
\"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\" successfully" Mar 17 17:49:17.595403 containerd[1502]: time="2025-03-17T17:49:17.594647644Z" level=info msg="StopPodSandbox for \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\" returns successfully" Mar 17 17:49:17.595403 containerd[1502]: time="2025-03-17T17:49:17.595081971Z" level=info msg="StopPodSandbox for \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\"" Mar 17 17:49:17.595403 containerd[1502]: time="2025-03-17T17:49:17.595177413Z" level=info msg="TearDown network for sandbox \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\" successfully" Mar 17 17:49:17.595403 containerd[1502]: time="2025-03-17T17:49:17.595207133Z" level=info msg="StopPodSandbox for \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\" returns successfully" Mar 17 17:49:17.595576 kubelet[2846]: I0317 17:49:17.595494 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278" Mar 17 17:49:17.596502 containerd[1502]: time="2025-03-17T17:49:17.596475635Z" level=info msg="StopPodSandbox for \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\"" Mar 17 17:49:17.597016 containerd[1502]: time="2025-03-17T17:49:17.596988763Z" level=info msg="StopPodSandbox for \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\"" Mar 17 17:49:17.597162 containerd[1502]: time="2025-03-17T17:49:17.597139446Z" level=info msg="TearDown network for sandbox \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\" successfully" Mar 17 17:49:17.597162 containerd[1502]: time="2025-03-17T17:49:17.597156766Z" level=info msg="StopPodSandbox for \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\" returns successfully" Mar 17 17:49:17.598930 containerd[1502]: time="2025-03-17T17:49:17.597336289Z" level=info msg="Ensure that sandbox 
460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278 in task-service has been cleanup successfully" Mar 17 17:49:17.599746 containerd[1502]: time="2025-03-17T17:49:17.598647751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5t9xs,Uid:18fbf695-ee31-4ad3-8e56-31fea597eadd,Namespace:kube-system,Attempt:4,}" Mar 17 17:49:17.600280 containerd[1502]: time="2025-03-17T17:49:17.599899132Z" level=info msg="TearDown network for sandbox \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\" successfully" Mar 17 17:49:17.600280 containerd[1502]: time="2025-03-17T17:49:17.599927732Z" level=info msg="StopPodSandbox for \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\" returns successfully" Mar 17 17:49:17.601152 systemd[1]: run-netns-cni\x2d39b986f4\x2d43ab\x2d3810\x2d135c\x2d8aac683fa5d3.mount: Deactivated successfully. Mar 17 17:49:17.604964 containerd[1502]: time="2025-03-17T17:49:17.604553369Z" level=info msg="StopPodSandbox for \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\"" Mar 17 17:49:17.604964 containerd[1502]: time="2025-03-17T17:49:17.604648811Z" level=info msg="TearDown network for sandbox \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\" successfully" Mar 17 17:49:17.604964 containerd[1502]: time="2025-03-17T17:49:17.604658531Z" level=info msg="StopPodSandbox for \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\" returns successfully" Mar 17 17:49:17.607196 containerd[1502]: time="2025-03-17T17:49:17.606992090Z" level=info msg="StopPodSandbox for \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\"" Mar 17 17:49:17.607196 containerd[1502]: time="2025-03-17T17:49:17.607105092Z" level=info msg="TearDown network for sandbox \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\" successfully" Mar 17 17:49:17.607196 containerd[1502]: time="2025-03-17T17:49:17.607116972Z" level=info msg="StopPodSandbox for 
\"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\" returns successfully" Mar 17 17:49:17.608434 containerd[1502]: time="2025-03-17T17:49:17.608333472Z" level=info msg="StopPodSandbox for \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\"" Mar 17 17:49:17.608724 containerd[1502]: time="2025-03-17T17:49:17.608573876Z" level=info msg="TearDown network for sandbox \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\" successfully" Mar 17 17:49:17.608724 containerd[1502]: time="2025-03-17T17:49:17.608595277Z" level=info msg="StopPodSandbox for \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\" returns successfully" Mar 17 17:49:17.608977 kubelet[2846]: I0317 17:49:17.608951 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b" Mar 17 17:49:17.610376 containerd[1502]: time="2025-03-17T17:49:17.609996300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-bblw2,Uid:74b237a4-f5d5-48d3-8f38-13c8c4872091,Namespace:calico-apiserver,Attempt:4,}" Mar 17 17:49:17.610918 containerd[1502]: time="2025-03-17T17:49:17.610875515Z" level=info msg="StopPodSandbox for \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\"" Mar 17 17:49:17.611092 containerd[1502]: time="2025-03-17T17:49:17.611073358Z" level=info msg="Ensure that sandbox 512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b in task-service has been cleanup successfully" Mar 17 17:49:17.612979 systemd[1]: run-netns-cni\x2d7b15b82a\x2d7b6b\x2d87f0\x2dac24\x2d8783c71e1732.mount: Deactivated successfully. 
Mar 17 17:49:17.613465 containerd[1502]: time="2025-03-17T17:49:17.613311595Z" level=info msg="TearDown network for sandbox \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\" successfully" Mar 17 17:49:17.613465 containerd[1502]: time="2025-03-17T17:49:17.613337036Z" level=info msg="StopPodSandbox for \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\" returns successfully" Mar 17 17:49:17.617134 containerd[1502]: time="2025-03-17T17:49:17.617035538Z" level=info msg="StopPodSandbox for \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\"" Mar 17 17:49:17.617279 containerd[1502]: time="2025-03-17T17:49:17.617207901Z" level=info msg="TearDown network for sandbox \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\" successfully" Mar 17 17:49:17.617279 containerd[1502]: time="2025-03-17T17:49:17.617223461Z" level=info msg="StopPodSandbox for \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\" returns successfully" Mar 17 17:49:17.617636 containerd[1502]: time="2025-03-17T17:49:17.617568347Z" level=info msg="StopPodSandbox for \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\"" Mar 17 17:49:17.618047 containerd[1502]: time="2025-03-17T17:49:17.617949473Z" level=info msg="TearDown network for sandbox \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\" successfully" Mar 17 17:49:17.618047 containerd[1502]: time="2025-03-17T17:49:17.618033634Z" level=info msg="StopPodSandbox for \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\" returns successfully" Mar 17 17:49:17.618953 containerd[1502]: time="2025-03-17T17:49:17.618750766Z" level=info msg="StopPodSandbox for \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\"" Mar 17 17:49:17.618953 containerd[1502]: time="2025-03-17T17:49:17.618828088Z" level=info msg="TearDown network for sandbox \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\" successfully" Mar 
17 17:49:17.618953 containerd[1502]: time="2025-03-17T17:49:17.618838568Z" level=info msg="StopPodSandbox for \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\" returns successfully" Mar 17 17:49:17.619753 containerd[1502]: time="2025-03-17T17:49:17.619557260Z" level=info msg="StopPodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\"" Mar 17 17:49:17.619753 containerd[1502]: time="2025-03-17T17:49:17.619621341Z" level=info msg="TearDown network for sandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" successfully" Mar 17 17:49:17.620350 containerd[1502]: time="2025-03-17T17:49:17.620298872Z" level=info msg="StopPodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" returns successfully" Mar 17 17:49:17.622018 containerd[1502]: time="2025-03-17T17:49:17.621984820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht6qb,Uid:37c19387-6a1a-435e-b624-cd3e3f772523,Namespace:calico-system,Attempt:5,}" Mar 17 17:49:17.623623 kubelet[2846]: I0317 17:49:17.622499 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3" Mar 17 17:49:17.624210 containerd[1502]: time="2025-03-17T17:49:17.624179177Z" level=info msg="StopPodSandbox for \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\"" Mar 17 17:49:17.625075 containerd[1502]: time="2025-03-17T17:49:17.624602824Z" level=info msg="Ensure that sandbox 1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3 in task-service has been cleanup successfully" Mar 17 17:49:17.627422 containerd[1502]: time="2025-03-17T17:49:17.627386950Z" level=info msg="TearDown network for sandbox \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\" successfully" Mar 17 17:49:17.627422 containerd[1502]: time="2025-03-17T17:49:17.627424071Z" level=info msg="StopPodSandbox for 
\"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\" returns successfully" Mar 17 17:49:17.628454 containerd[1502]: time="2025-03-17T17:49:17.628417288Z" level=info msg="StopPodSandbox for \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\"" Mar 17 17:49:17.628539 containerd[1502]: time="2025-03-17T17:49:17.628523969Z" level=info msg="TearDown network for sandbox \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\" successfully" Mar 17 17:49:17.628539 containerd[1502]: time="2025-03-17T17:49:17.628536810Z" level=info msg="StopPodSandbox for \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\" returns successfully" Mar 17 17:49:17.630118 containerd[1502]: time="2025-03-17T17:49:17.630034755Z" level=info msg="StopPodSandbox for \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\"" Mar 17 17:49:17.630178 containerd[1502]: time="2025-03-17T17:49:17.630163557Z" level=info msg="TearDown network for sandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\" successfully" Mar 17 17:49:17.630178 containerd[1502]: time="2025-03-17T17:49:17.630175197Z" level=info msg="StopPodSandbox for \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\" returns successfully" Mar 17 17:49:17.630736 containerd[1502]: time="2025-03-17T17:49:17.630703726Z" level=info msg="StopPodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\"" Mar 17 17:49:17.630831 containerd[1502]: time="2025-03-17T17:49:17.630800887Z" level=info msg="TearDown network for sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" successfully" Mar 17 17:49:17.630831 containerd[1502]: time="2025-03-17T17:49:17.630813288Z" level=info msg="StopPodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" returns successfully" Mar 17 17:49:17.632847 containerd[1502]: time="2025-03-17T17:49:17.632805561Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6fb9b7ff55-f6s4k,Uid:447aab82-3c54-4fc9-a563-99b96e52f28a,Namespace:calico-system,Attempt:4,}" Mar 17 17:49:17.834251 containerd[1502]: time="2025-03-17T17:49:17.832633376Z" level=error msg="Failed to destroy network for sandbox \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.834880 containerd[1502]: time="2025-03-17T17:49:17.834725730Z" level=error msg="encountered an error cleaning up failed sandbox \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.834880 containerd[1502]: time="2025-03-17T17:49:17.834790332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-nd4n6,Uid:a74fbbc9-5937-415b-8d68-ed4ea0db44e4,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.835190 kubelet[2846]: E0317 17:49:17.835142 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 
17:49:17.835269 kubelet[2846]: E0317 17:49:17.835207 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" Mar 17 17:49:17.835269 kubelet[2846]: E0317 17:49:17.835227 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" Mar 17 17:49:17.835390 kubelet[2846]: E0317 17:49:17.835272 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5545ddd8-nd4n6_calico-apiserver(a74fbbc9-5937-415b-8d68-ed4ea0db44e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5545ddd8-nd4n6_calico-apiserver(a74fbbc9-5937-415b-8d68-ed4ea0db44e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" podUID="a74fbbc9-5937-415b-8d68-ed4ea0db44e4" Mar 17 17:49:17.871470 containerd[1502]: time="2025-03-17T17:49:17.871424263Z" level=error msg="Failed to destroy network for sandbox 
\"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.874782 containerd[1502]: time="2025-03-17T17:49:17.874485834Z" level=error msg="encountered an error cleaning up failed sandbox \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.874957 containerd[1502]: time="2025-03-17T17:49:17.874931721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht6qb,Uid:37c19387-6a1a-435e-b624-cd3e3f772523,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.875907 kubelet[2846]: E0317 17:49:17.875862 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.876040 kubelet[2846]: E0317 17:49:17.875924 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:17.876040 kubelet[2846]: E0317 17:49:17.875943 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht6qb" Mar 17 17:49:17.876040 kubelet[2846]: E0317 17:49:17.875987 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ht6qb_calico-system(37c19387-6a1a-435e-b624-cd3e3f772523)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ht6qb_calico-system(37c19387-6a1a-435e-b624-cd3e3f772523)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ht6qb" podUID="37c19387-6a1a-435e-b624-cd3e3f772523" Mar 17 17:49:17.876799 containerd[1502]: time="2025-03-17T17:49:17.876513188Z" level=error msg="Failed to destroy network for sandbox \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.877451 containerd[1502]: time="2025-03-17T17:49:17.877425843Z" level=error msg="encountered an error cleaning up 
failed sandbox \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.877633 containerd[1502]: time="2025-03-17T17:49:17.877589766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n8th,Uid:89e31e82-fcf0-4b12-9877-940dcbb04dfb,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.878475 kubelet[2846]: E0317 17:49:17.877887 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.878475 kubelet[2846]: E0317 17:49:17.878384 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9n8th" Mar 17 17:49:17.878475 kubelet[2846]: E0317 17:49:17.878404 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9n8th" Mar 17 17:49:17.878606 kubelet[2846]: E0317 17:49:17.878451 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9n8th_kube-system(89e31e82-fcf0-4b12-9877-940dcbb04dfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9n8th_kube-system(89e31e82-fcf0-4b12-9877-940dcbb04dfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9n8th" podUID="89e31e82-fcf0-4b12-9877-940dcbb04dfb" Mar 17 17:49:17.887775 containerd[1502]: time="2025-03-17T17:49:17.887339288Z" level=error msg="Failed to destroy network for sandbox \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.887775 containerd[1502]: time="2025-03-17T17:49:17.887645454Z" level=error msg="encountered an error cleaning up failed sandbox \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.887775 containerd[1502]: 
time="2025-03-17T17:49:17.887696054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5t9xs,Uid:18fbf695-ee31-4ad3-8e56-31fea597eadd,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.888404 kubelet[2846]: E0317 17:49:17.888043 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.888404 kubelet[2846]: E0317 17:49:17.888136 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5t9xs" Mar 17 17:49:17.888404 kubelet[2846]: E0317 17:49:17.888159 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5t9xs" Mar 17 17:49:17.888569 kubelet[2846]: E0317 17:49:17.888203 
2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5t9xs_kube-system(18fbf695-ee31-4ad3-8e56-31fea597eadd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5t9xs_kube-system(18fbf695-ee31-4ad3-8e56-31fea597eadd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5t9xs" podUID="18fbf695-ee31-4ad3-8e56-31fea597eadd" Mar 17 17:49:17.894494 containerd[1502]: time="2025-03-17T17:49:17.894017560Z" level=error msg="Failed to destroy network for sandbox \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.894750 containerd[1502]: time="2025-03-17T17:49:17.894665651Z" level=error msg="encountered an error cleaning up failed sandbox \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.894750 containerd[1502]: time="2025-03-17T17:49:17.894737732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-bblw2,Uid:74b237a4-f5d5-48d3-8f38-13c8c4872091,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.895006 kubelet[2846]: E0317 17:49:17.894929 2846 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.895006 kubelet[2846]: E0317 17:49:17.894981 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" Mar 17 17:49:17.895006 kubelet[2846]: E0317 17:49:17.895000 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" Mar 17 17:49:17.895144 kubelet[2846]: E0317 17:49:17.895044 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5545ddd8-bblw2_calico-apiserver(74b237a4-f5d5-48d3-8f38-13c8c4872091)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-c5545ddd8-bblw2_calico-apiserver(74b237a4-f5d5-48d3-8f38-13c8c4872091)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" podUID="74b237a4-f5d5-48d3-8f38-13c8c4872091" Mar 17 17:49:17.907899 containerd[1502]: time="2025-03-17T17:49:17.907699988Z" level=error msg="Failed to destroy network for sandbox \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.908133 containerd[1502]: time="2025-03-17T17:49:17.908035674Z" level=error msg="encountered an error cleaning up failed sandbox \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.908133 containerd[1502]: time="2025-03-17T17:49:17.908116035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb9b7ff55-f6s4k,Uid:447aab82-3c54-4fc9-a563-99b96e52f28a,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.908970 kubelet[2846]: E0317 17:49:17.908499 2846 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:17.908970 kubelet[2846]: E0317 17:49:17.908557 2846 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" Mar 17 17:49:17.908970 kubelet[2846]: E0317 17:49:17.908576 2846 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" Mar 17 17:49:17.909139 kubelet[2846]: E0317 17:49:17.908624 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fb9b7ff55-f6s4k_calico-system(447aab82-3c54-4fc9-a563-99b96e52f28a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fb9b7ff55-f6s4k_calico-system(447aab82-3c54-4fc9-a563-99b96e52f28a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" podUID="447aab82-3c54-4fc9-a563-99b96e52f28a" Mar 17 17:49:18.197189 systemd[1]: run-netns-cni\x2d869937e5\x2d3f6c\x2d62ee\x2d930e\x2d159642339d86.mount: Deactivated successfully. Mar 17 17:49:18.240378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861075517.mount: Deactivated successfully. Mar 17 17:49:18.264106 containerd[1502]: time="2025-03-17T17:49:18.263997590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:18.265329 containerd[1502]: time="2025-03-17T17:49:18.265242371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Mar 17 17:49:18.266095 containerd[1502]: time="2025-03-17T17:49:18.265987744Z" level=info msg="ImageCreate event name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:18.269827 containerd[1502]: time="2025-03-17T17:49:18.269606445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:18.271494 containerd[1502]: time="2025-03-17T17:49:18.271381356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 4.918704892s" Mar 17 17:49:18.271841 containerd[1502]: time="2025-03-17T17:49:18.271657440Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Mar 17 17:49:18.291543 containerd[1502]: time="2025-03-17T17:49:18.291501499Z" level=info msg="CreateContainer within sandbox \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:49:18.310553 containerd[1502]: time="2025-03-17T17:49:18.310491423Z" level=info msg="CreateContainer within sandbox \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0\"" Mar 17 17:49:18.311923 containerd[1502]: time="2025-03-17T17:49:18.311721764Z" level=info msg="StartContainer for \"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0\"" Mar 17 17:49:18.341585 systemd[1]: Started cri-containerd-a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0.scope - libcontainer container a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0. Mar 17 17:49:18.377013 containerd[1502]: time="2025-03-17T17:49:18.376942396Z" level=info msg="StartContainer for \"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0\" returns successfully" Mar 17 17:49:18.485507 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:49:18.485670 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Mar 17 17:49:18.627952 kubelet[2846]: I0317 17:49:18.627910 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4" Mar 17 17:49:18.631347 containerd[1502]: time="2025-03-17T17:49:18.630271475Z" level=info msg="StopPodSandbox for \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\"" Mar 17 17:49:18.631347 containerd[1502]: time="2025-03-17T17:49:18.630486159Z" level=info msg="Ensure that sandbox d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4 in task-service has been cleanup successfully" Mar 17 17:49:18.631347 containerd[1502]: time="2025-03-17T17:49:18.630689683Z" level=info msg="TearDown network for sandbox \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\" successfully" Mar 17 17:49:18.631347 containerd[1502]: time="2025-03-17T17:49:18.630704083Z" level=info msg="StopPodSandbox for \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\" returns successfully" Mar 17 17:49:18.632269 containerd[1502]: time="2025-03-17T17:49:18.631575978Z" level=info msg="StopPodSandbox for \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\"" Mar 17 17:49:18.632269 containerd[1502]: time="2025-03-17T17:49:18.631651659Z" level=info msg="TearDown network for sandbox \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\" successfully" Mar 17 17:49:18.632269 containerd[1502]: time="2025-03-17T17:49:18.631660899Z" level=info msg="StopPodSandbox for \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\" returns successfully" Mar 17 17:49:18.632414 containerd[1502]: time="2025-03-17T17:49:18.632270990Z" level=info msg="StopPodSandbox for \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\"" Mar 17 17:49:18.632414 containerd[1502]: time="2025-03-17T17:49:18.632388952Z" level=info msg="TearDown network for sandbox 
\"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\" successfully" Mar 17 17:49:18.632414 containerd[1502]: time="2025-03-17T17:49:18.632401072Z" level=info msg="StopPodSandbox for \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\" returns successfully" Mar 17 17:49:18.633203 containerd[1502]: time="2025-03-17T17:49:18.633017202Z" level=info msg="StopPodSandbox for \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\"" Mar 17 17:49:18.633203 containerd[1502]: time="2025-03-17T17:49:18.633155645Z" level=info msg="TearDown network for sandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\" successfully" Mar 17 17:49:18.633203 containerd[1502]: time="2025-03-17T17:49:18.633168525Z" level=info msg="StopPodSandbox for \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\" returns successfully" Mar 17 17:49:18.635606 containerd[1502]: time="2025-03-17T17:49:18.634328585Z" level=info msg="StopPodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\"" Mar 17 17:49:18.635606 containerd[1502]: time="2025-03-17T17:49:18.634659270Z" level=info msg="TearDown network for sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" successfully" Mar 17 17:49:18.635606 containerd[1502]: time="2025-03-17T17:49:18.634684231Z" level=info msg="StopPodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" returns successfully" Mar 17 17:49:18.635752 kubelet[2846]: I0317 17:49:18.633898 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152" Mar 17 17:49:18.637326 containerd[1502]: time="2025-03-17T17:49:18.636550903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb9b7ff55-f6s4k,Uid:447aab82-3c54-4fc9-a563-99b96e52f28a,Namespace:calico-system,Attempt:5,}" Mar 17 17:49:18.639954 containerd[1502]: 
time="2025-03-17T17:49:18.639919960Z" level=info msg="StopPodSandbox for \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\"" Mar 17 17:49:18.641382 containerd[1502]: time="2025-03-17T17:49:18.641145541Z" level=info msg="Ensure that sandbox ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152 in task-service has been cleanup successfully" Mar 17 17:49:18.641836 containerd[1502]: time="2025-03-17T17:49:18.641530947Z" level=info msg="TearDown network for sandbox \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\" successfully" Mar 17 17:49:18.641836 containerd[1502]: time="2025-03-17T17:49:18.641574268Z" level=info msg="StopPodSandbox for \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\" returns successfully" Mar 17 17:49:18.642774 containerd[1502]: time="2025-03-17T17:49:18.642752168Z" level=info msg="StopPodSandbox for \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\"" Mar 17 17:49:18.643513 containerd[1502]: time="2025-03-17T17:49:18.643431860Z" level=info msg="TearDown network for sandbox \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\" successfully" Mar 17 17:49:18.643721 containerd[1502]: time="2025-03-17T17:49:18.643556342Z" level=info msg="StopPodSandbox for \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\" returns successfully" Mar 17 17:49:18.644421 containerd[1502]: time="2025-03-17T17:49:18.644214193Z" level=info msg="StopPodSandbox for \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\"" Mar 17 17:49:18.644421 containerd[1502]: time="2025-03-17T17:49:18.644319195Z" level=info msg="TearDown network for sandbox \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\" successfully" Mar 17 17:49:18.644421 containerd[1502]: time="2025-03-17T17:49:18.644329315Z" level=info msg="StopPodSandbox for \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\" returns successfully" Mar 17 17:49:18.645973 
containerd[1502]: time="2025-03-17T17:49:18.645810660Z" level=info msg="StopPodSandbox for \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\"" Mar 17 17:49:18.645973 containerd[1502]: time="2025-03-17T17:49:18.645904182Z" level=info msg="TearDown network for sandbox \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\" successfully" Mar 17 17:49:18.645973 containerd[1502]: time="2025-03-17T17:49:18.645914982Z" level=info msg="StopPodSandbox for \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\" returns successfully" Mar 17 17:49:18.647221 containerd[1502]: time="2025-03-17T17:49:18.646771317Z" level=info msg="StopPodSandbox for \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\"" Mar 17 17:49:18.647221 containerd[1502]: time="2025-03-17T17:49:18.647107843Z" level=info msg="TearDown network for sandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\" successfully" Mar 17 17:49:18.647221 containerd[1502]: time="2025-03-17T17:49:18.647119923Z" level=info msg="StopPodSandbox for \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\" returns successfully" Mar 17 17:49:18.648423 kubelet[2846]: I0317 17:49:18.648342 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107" Mar 17 17:49:18.652025 containerd[1502]: time="2025-03-17T17:49:18.651676120Z" level=info msg="StopPodSandbox for \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\"" Mar 17 17:49:18.652025 containerd[1502]: time="2025-03-17T17:49:18.651861604Z" level=info msg="Ensure that sandbox 63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107 in task-service has been cleanup successfully" Mar 17 17:49:18.653129 containerd[1502]: time="2025-03-17T17:49:18.653091145Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-nd4n6,Uid:a74fbbc9-5937-415b-8d68-ed4ea0db44e4,Namespace:calico-apiserver,Attempt:5,}" Mar 17 17:49:18.654366 containerd[1502]: time="2025-03-17T17:49:18.653463631Z" level=info msg="TearDown network for sandbox \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\" successfully" Mar 17 17:49:18.654366 containerd[1502]: time="2025-03-17T17:49:18.654319966Z" level=info msg="StopPodSandbox for \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\" returns successfully" Mar 17 17:49:18.655621 containerd[1502]: time="2025-03-17T17:49:18.655118739Z" level=info msg="StopPodSandbox for \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\"" Mar 17 17:49:18.655621 containerd[1502]: time="2025-03-17T17:49:18.655209021Z" level=info msg="TearDown network for sandbox \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\" successfully" Mar 17 17:49:18.655621 containerd[1502]: time="2025-03-17T17:49:18.655217341Z" level=info msg="StopPodSandbox for \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\" returns successfully" Mar 17 17:49:18.656139 containerd[1502]: time="2025-03-17T17:49:18.656080436Z" level=info msg="StopPodSandbox for \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\"" Mar 17 17:49:18.656349 containerd[1502]: time="2025-03-17T17:49:18.656294079Z" level=info msg="TearDown network for sandbox \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\" successfully" Mar 17 17:49:18.656349 containerd[1502]: time="2025-03-17T17:49:18.656310160Z" level=info msg="StopPodSandbox for \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\" returns successfully" Mar 17 17:49:18.656936 containerd[1502]: time="2025-03-17T17:49:18.656894689Z" level=info msg="StopPodSandbox for \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\"" Mar 17 17:49:18.657117 containerd[1502]: 
time="2025-03-17T17:49:18.657047692Z" level=info msg="TearDown network for sandbox \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\" successfully" Mar 17 17:49:18.657117 containerd[1502]: time="2025-03-17T17:49:18.657098053Z" level=info msg="StopPodSandbox for \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\" returns successfully" Mar 17 17:49:18.660856 containerd[1502]: time="2025-03-17T17:49:18.660656994Z" level=info msg="StopPodSandbox for \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\"" Mar 17 17:49:18.663497 containerd[1502]: time="2025-03-17T17:49:18.663466162Z" level=info msg="TearDown network for sandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\" successfully" Mar 17 17:49:18.664162 containerd[1502]: time="2025-03-17T17:49:18.664136173Z" level=info msg="StopPodSandbox for \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\" returns successfully" Mar 17 17:49:18.668430 containerd[1502]: time="2025-03-17T17:49:18.668197802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n8th,Uid:89e31e82-fcf0-4b12-9877-940dcbb04dfb,Namespace:kube-system,Attempt:5,}" Mar 17 17:49:18.671584 kubelet[2846]: I0317 17:49:18.671444 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc" Mar 17 17:49:18.680969 containerd[1502]: time="2025-03-17T17:49:18.680551533Z" level=info msg="StopPodSandbox for \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\"" Mar 17 17:49:18.681644 containerd[1502]: time="2025-03-17T17:49:18.681598111Z" level=info msg="Ensure that sandbox 299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc in task-service has been cleanup successfully" Mar 17 17:49:18.682328 containerd[1502]: time="2025-03-17T17:49:18.682199001Z" level=info msg="TearDown network for sandbox 
\"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\" successfully" Mar 17 17:49:18.682328 containerd[1502]: time="2025-03-17T17:49:18.682219521Z" level=info msg="StopPodSandbox for \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\" returns successfully" Mar 17 17:49:18.683792 containerd[1502]: time="2025-03-17T17:49:18.683643506Z" level=info msg="StopPodSandbox for \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\"" Mar 17 17:49:18.683792 containerd[1502]: time="2025-03-17T17:49:18.683754187Z" level=info msg="TearDown network for sandbox \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\" successfully" Mar 17 17:49:18.683792 containerd[1502]: time="2025-03-17T17:49:18.683765588Z" level=info msg="StopPodSandbox for \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\" returns successfully" Mar 17 17:49:18.684978 containerd[1502]: time="2025-03-17T17:49:18.684947928Z" level=info msg="StopPodSandbox for \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\"" Mar 17 17:49:18.685148 containerd[1502]: time="2025-03-17T17:49:18.685023729Z" level=info msg="TearDown network for sandbox \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\" successfully" Mar 17 17:49:18.685148 containerd[1502]: time="2025-03-17T17:49:18.685033929Z" level=info msg="StopPodSandbox for \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\" returns successfully" Mar 17 17:49:18.685898 containerd[1502]: time="2025-03-17T17:49:18.685868064Z" level=info msg="StopPodSandbox for \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\"" Mar 17 17:49:18.686024 containerd[1502]: time="2025-03-17T17:49:18.685947785Z" level=info msg="TearDown network for sandbox \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\" successfully" Mar 17 17:49:18.686024 containerd[1502]: time="2025-03-17T17:49:18.685957105Z" level=info msg="StopPodSandbox for 
\"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\" returns successfully" Mar 17 17:49:18.686867 containerd[1502]: time="2025-03-17T17:49:18.686725758Z" level=info msg="StopPodSandbox for \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\"" Mar 17 17:49:18.687514 kubelet[2846]: I0317 17:49:18.687463 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616" Mar 17 17:49:18.688213 containerd[1502]: time="2025-03-17T17:49:18.688177263Z" level=info msg="TearDown network for sandbox \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\" successfully" Mar 17 17:49:18.688470 containerd[1502]: time="2025-03-17T17:49:18.688300585Z" level=info msg="StopPodSandbox for \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\" returns successfully" Mar 17 17:49:18.688576 containerd[1502]: time="2025-03-17T17:49:18.688549109Z" level=info msg="StopPodSandbox for \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\"" Mar 17 17:49:18.696963 containerd[1502]: time="2025-03-17T17:49:18.696889731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5t9xs,Uid:18fbf695-ee31-4ad3-8e56-31fea597eadd,Namespace:kube-system,Attempt:5,}" Mar 17 17:49:18.699411 containerd[1502]: time="2025-03-17T17:49:18.699197091Z" level=info msg="Ensure that sandbox 9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616 in task-service has been cleanup successfully" Mar 17 17:49:18.699857 containerd[1502]: time="2025-03-17T17:49:18.699696459Z" level=info msg="TearDown network for sandbox \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\" successfully" Mar 17 17:49:18.699857 containerd[1502]: time="2025-03-17T17:49:18.699723260Z" level=info msg="StopPodSandbox for \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\" returns successfully" Mar 17 17:49:18.700954 
containerd[1502]: time="2025-03-17T17:49:18.700880760Z" level=info msg="StopPodSandbox for \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\"" Mar 17 17:49:18.701423 containerd[1502]: time="2025-03-17T17:49:18.700973881Z" level=info msg="TearDown network for sandbox \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\" successfully" Mar 17 17:49:18.701423 containerd[1502]: time="2025-03-17T17:49:18.700984561Z" level=info msg="StopPodSandbox for \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\" returns successfully" Mar 17 17:49:18.703498 containerd[1502]: time="2025-03-17T17:49:18.703438723Z" level=info msg="StopPodSandbox for \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\"" Mar 17 17:49:18.703553 containerd[1502]: time="2025-03-17T17:49:18.703522765Z" level=info msg="TearDown network for sandbox \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\" successfully" Mar 17 17:49:18.703553 containerd[1502]: time="2025-03-17T17:49:18.703533165Z" level=info msg="StopPodSandbox for \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\" returns successfully" Mar 17 17:49:18.706203 containerd[1502]: time="2025-03-17T17:49:18.706058688Z" level=info msg="StopPodSandbox for \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\"" Mar 17 17:49:18.706412 containerd[1502]: time="2025-03-17T17:49:18.706342533Z" level=info msg="TearDown network for sandbox \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\" successfully" Mar 17 17:49:18.706506 containerd[1502]: time="2025-03-17T17:49:18.706492615Z" level=info msg="StopPodSandbox for \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\" returns successfully" Mar 17 17:49:18.707289 kubelet[2846]: I0317 17:49:18.707256 2846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f" Mar 17 
17:49:18.708033 containerd[1502]: time="2025-03-17T17:49:18.707891599Z" level=info msg="StopPodSandbox for \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\"" Mar 17 17:49:18.708513 containerd[1502]: time="2025-03-17T17:49:18.708451969Z" level=info msg="TearDown network for sandbox \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\" successfully" Mar 17 17:49:18.708782 containerd[1502]: time="2025-03-17T17:49:18.708470649Z" level=info msg="StopPodSandbox for \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\" returns successfully" Mar 17 17:49:18.718468 containerd[1502]: time="2025-03-17T17:49:18.718133294Z" level=info msg="StopPodSandbox for \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\"" Mar 17 17:49:18.718727 containerd[1502]: time="2025-03-17T17:49:18.718611462Z" level=info msg="Ensure that sandbox b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f in task-service has been cleanup successfully" Mar 17 17:49:18.718987 containerd[1502]: time="2025-03-17T17:49:18.718849186Z" level=info msg="StopPodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\"" Mar 17 17:49:18.720572 containerd[1502]: time="2025-03-17T17:49:18.720436213Z" level=info msg="TearDown network for sandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" successfully" Mar 17 17:49:18.720753 containerd[1502]: time="2025-03-17T17:49:18.720646737Z" level=info msg="StopPodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" returns successfully" Mar 17 17:49:18.722810 containerd[1502]: time="2025-03-17T17:49:18.722746652Z" level=info msg="TearDown network for sandbox \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\" successfully" Mar 17 17:49:18.723126 containerd[1502]: time="2025-03-17T17:49:18.723023897Z" level=info msg="StopPodSandbox for \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\" returns 
successfully" Mar 17 17:49:18.723234 containerd[1502]: time="2025-03-17T17:49:18.723212820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht6qb,Uid:37c19387-6a1a-435e-b624-cd3e3f772523,Namespace:calico-system,Attempt:6,}" Mar 17 17:49:18.725218 containerd[1502]: time="2025-03-17T17:49:18.725186094Z" level=info msg="StopPodSandbox for \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\"" Mar 17 17:49:18.725680 containerd[1502]: time="2025-03-17T17:49:18.725620061Z" level=info msg="TearDown network for sandbox \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\" successfully" Mar 17 17:49:18.725680 containerd[1502]: time="2025-03-17T17:49:18.725638182Z" level=info msg="StopPodSandbox for \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\" returns successfully" Mar 17 17:49:18.729635 containerd[1502]: time="2025-03-17T17:49:18.729516408Z" level=info msg="StopPodSandbox for \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\"" Mar 17 17:49:18.729635 containerd[1502]: time="2025-03-17T17:49:18.729622730Z" level=info msg="TearDown network for sandbox \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\" successfully" Mar 17 17:49:18.729635 containerd[1502]: time="2025-03-17T17:49:18.729633530Z" level=info msg="StopPodSandbox for \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\" returns successfully" Mar 17 17:49:18.736442 containerd[1502]: time="2025-03-17T17:49:18.734259529Z" level=info msg="StopPodSandbox for \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\"" Mar 17 17:49:18.736442 containerd[1502]: time="2025-03-17T17:49:18.734382931Z" level=info msg="TearDown network for sandbox \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\" successfully" Mar 17 17:49:18.736442 containerd[1502]: time="2025-03-17T17:49:18.734393531Z" level=info msg="StopPodSandbox for 
\"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\" returns successfully" Mar 17 17:49:18.738871 containerd[1502]: time="2025-03-17T17:49:18.738035113Z" level=info msg="StopPodSandbox for \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\"" Mar 17 17:49:18.738871 containerd[1502]: time="2025-03-17T17:49:18.738212676Z" level=info msg="TearDown network for sandbox \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\" successfully" Mar 17 17:49:18.738871 containerd[1502]: time="2025-03-17T17:49:18.738226196Z" level=info msg="StopPodSandbox for \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\" returns successfully" Mar 17 17:49:18.739792 containerd[1502]: time="2025-03-17T17:49:18.739499578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-bblw2,Uid:74b237a4-f5d5-48d3-8f38-13c8c4872091,Namespace:calico-apiserver,Attempt:5,}" Mar 17 17:49:19.124004 systemd-networkd[1397]: calic1f3527c2f4: Link UP Mar 17 17:49:19.125294 systemd-networkd[1397]: calic1f3527c2f4: Gained carrier Mar 17 17:49:19.152052 kubelet[2846]: I0317 17:49:19.151975 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jsxrm" podStartSLOduration=2.797313219 podStartE2EDuration="16.151861103s" podCreationTimestamp="2025-03-17 17:49:03 +0000 UTC" firstStartedPulling="2025-03-17 17:49:04.918132054 +0000 UTC m=+28.848408768" lastFinishedPulling="2025-03-17 17:49:18.272679938 +0000 UTC m=+42.202956652" observedRunningTime="2025-03-17 17:49:18.783897215 +0000 UTC m=+42.714173929" watchObservedRunningTime="2025-03-17 17:49:19.151861103 +0000 UTC m=+43.082137817" Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:18.714 [INFO][4651] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:18.766 [INFO][4651] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0 calico-kube-controllers-6fb9b7ff55- calico-system 447aab82-3c54-4fc9-a563-99b96e52f28a 733 0 2025-03-17 17:49:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fb9b7ff55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4230-1-0-b-a06069b96b calico-kube-controllers-6fb9b7ff55-f6s4k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic1f3527c2f4 [] []}} ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Namespace="calico-system" Pod="calico-kube-controllers-6fb9b7ff55-f6s4k" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-" Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:18.766 [INFO][4651] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Namespace="calico-system" Pod="calico-kube-controllers-6fb9b7ff55-f6s4k" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:18.975 [INFO][4697] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" HandleID="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.003 [INFO][4697] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" 
HandleID="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f1190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230-1-0-b-a06069b96b", "pod":"calico-kube-controllers-6fb9b7ff55-f6s4k", "timestamp":"2025-03-17 17:49:18.975862849 +0000 UTC"}, Hostname:"ci-4230-1-0-b-a06069b96b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.003 [INFO][4697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.003 [INFO][4697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.003 [INFO][4697] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-1-0-b-a06069b96b' Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.009 [INFO][4697] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.027 [INFO][4697] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.045 [INFO][4697] ipam/ipam.go 489: Trying affinity for 192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.054 [INFO][4697] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.059 [INFO][4697] ipam/ipam.go 
232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.059 [INFO][4697] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.064 [INFO][4697] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153 Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.075 [INFO][4697] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.092 [INFO][4697] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.193/26] block=192.168.32.192/26 handle="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.092 [INFO][4697] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.193/26] handle="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.095 [INFO][4697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:49:19.169625 containerd[1502]: 2025-03-17 17:49:19.095 [INFO][4697] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.193/26] IPv6=[] ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" HandleID="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:19.170266 containerd[1502]: 2025-03-17 17:49:19.114 [INFO][4651] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Namespace="calico-system" Pod="calico-kube-controllers-6fb9b7ff55-f6s4k" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0", GenerateName:"calico-kube-controllers-6fb9b7ff55-", Namespace:"calico-system", SelfLink:"", UID:"447aab82-3c54-4fc9-a563-99b96e52f28a", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fb9b7ff55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-1-0-b-a06069b96b", ContainerID:"", Pod:"calico-kube-controllers-6fb9b7ff55-f6s4k", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1f3527c2f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:19.170266 containerd[1502]: 2025-03-17 17:49:19.115 [INFO][4651] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.193/32] ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Namespace="calico-system" Pod="calico-kube-controllers-6fb9b7ff55-f6s4k" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:19.170266 containerd[1502]: 2025-03-17 17:49:19.115 [INFO][4651] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1f3527c2f4 ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Namespace="calico-system" Pod="calico-kube-controllers-6fb9b7ff55-f6s4k" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:19.170266 containerd[1502]: 2025-03-17 17:49:19.126 [INFO][4651] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Namespace="calico-system" Pod="calico-kube-controllers-6fb9b7ff55-f6s4k" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:19.170266 containerd[1502]: 2025-03-17 17:49:19.126 [INFO][4651] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Namespace="calico-system" Pod="calico-kube-controllers-6fb9b7ff55-f6s4k" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0", GenerateName:"calico-kube-controllers-6fb9b7ff55-", Namespace:"calico-system", SelfLink:"", UID:"447aab82-3c54-4fc9-a563-99b96e52f28a", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fb9b7ff55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-1-0-b-a06069b96b", ContainerID:"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153", Pod:"calico-kube-controllers-6fb9b7ff55-f6s4k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1f3527c2f4", MAC:"62:1a:81:63:f4:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:19.170266 containerd[1502]: 2025-03-17 17:49:19.162 [INFO][4651] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Namespace="calico-system" Pod="calico-kube-controllers-6fb9b7ff55-f6s4k" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 
17:49:19.210746 systemd[1]: run-netns-cni\x2d01218ac9\x2d6ef1\x2d097f\x2dd860\x2daa46ebf9f1fa.mount: Deactivated successfully. Mar 17 17:49:19.211592 systemd[1]: run-netns-cni\x2d42984196\x2da2d5\x2d3d1a\x2d9fc5\x2d1a1f02364813.mount: Deactivated successfully. Mar 17 17:49:19.211641 systemd[1]: run-netns-cni\x2de733215a\x2d01d7\x2d6f53\x2d4574\x2d71f43d9c8789.mount: Deactivated successfully. Mar 17 17:49:19.211690 systemd[1]: run-netns-cni\x2d6a7d74a4\x2d61dc\x2d9a62\x2db698\x2dc25dac06c9ea.mount: Deactivated successfully. Mar 17 17:49:19.211733 systemd[1]: run-netns-cni\x2d646a68d6\x2d6cca\x2ddcfc\x2db081\x2d9e97f52c318a.mount: Deactivated successfully. Mar 17 17:49:19.212335 systemd[1]: run-netns-cni\x2d3ac6b2fe\x2d4079\x2d5b6d\x2d13e6\x2d34c7f7169f83.mount: Deactivated successfully. Mar 17 17:49:19.234696 systemd-networkd[1397]: cali7e8f39e235e: Link UP Mar 17 17:49:19.235196 systemd-networkd[1397]: cali7e8f39e235e: Gained carrier Mar 17 17:49:19.246170 containerd[1502]: time="2025-03-17T17:49:19.245542013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:19.246170 containerd[1502]: time="2025-03-17T17:49:19.245971661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:19.246170 containerd[1502]: time="2025-03-17T17:49:19.245988701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:19.247567 containerd[1502]: time="2025-03-17T17:49:19.246440469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:18.861 [INFO][4681] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:18.906 [INFO][4681] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--5t9xs-eth0 coredns-7db6d8ff4d- kube-system 18fbf695-ee31-4ad3-8e56-31fea597eadd 728 0 2025-03-17 17:48:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230-1-0-b-a06069b96b coredns-7db6d8ff4d-5t9xs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7e8f39e235e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5t9xs" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--5t9xs-" Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:18.906 [INFO][4681] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5t9xs" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--5t9xs-eth0" Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.049 [INFO][4728] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" HandleID="k8s-pod-network.95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" Workload="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--5t9xs-eth0" Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.102 [INFO][4728] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" HandleID="k8s-pod-network.95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" Workload="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--5t9xs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a4380), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230-1-0-b-a06069b96b", "pod":"coredns-7db6d8ff4d-5t9xs", "timestamp":"2025-03-17 17:49:19.049229357 +0000 UTC"}, Hostname:"ci-4230-1-0-b-a06069b96b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.103 [INFO][4728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.103 [INFO][4728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.104 [INFO][4728] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-1-0-b-a06069b96b' Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.112 [INFO][4728] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.138 [INFO][4728] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.172 [INFO][4728] ipam/ipam.go 489: Trying affinity for 192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.175 [INFO][4728] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.179 [INFO][4728] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.179 [INFO][4728] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.182 [INFO][4728] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17 Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.202 [INFO][4728] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.219 [INFO][4728] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.32.194/26] block=192.168.32.192/26 handle="k8s-pod-network.95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.219 [INFO][4728] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.194/26] handle="k8s-pod-network.95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.219 [INFO][4728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:49:19.263052 containerd[1502]: 2025-03-17 17:49:19.221 [INFO][4728] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.194/26] IPv6=[] ContainerID="95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" HandleID="k8s-pod-network.95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" Workload="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--5t9xs-eth0" Mar 17 17:49:19.264960 containerd[1502]: 2025-03-17 17:49:19.228 [INFO][4681] cni-plugin/k8s.go 386: Populated endpoint ContainerID="95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5t9xs" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--5t9xs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--5t9xs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"18fbf695-ee31-4ad3-8e56-31fea597eadd", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-1-0-b-a06069b96b", ContainerID:"", Pod:"coredns-7db6d8ff4d-5t9xs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e8f39e235e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:19.264960 containerd[1502]: 2025-03-17 17:49:19.229 [INFO][4681] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.194/32] ContainerID="95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5t9xs" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--5t9xs-eth0" Mar 17 17:49:19.264960 containerd[1502]: 2025-03-17 17:49:19.229 [INFO][4681] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e8f39e235e ContainerID="95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5t9xs" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--5t9xs-eth0" Mar 17 17:49:19.264960 containerd[1502]: 2025-03-17 17:49:19.236 [INFO][4681] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5t9xs" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--5t9xs-eth0" Mar 17 17:49:19.264960 containerd[1502]: 2025-03-17 17:49:19.236 [INFO][4681] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5t9xs" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--5t9xs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--5t9xs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"18fbf695-ee31-4ad3-8e56-31fea597eadd", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-1-0-b-a06069b96b", ContainerID:"95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17", Pod:"coredns-7db6d8ff4d-5t9xs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e8f39e235e", MAC:"8e:83:9c:db:7a:77", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:19.264960 containerd[1502]: 2025-03-17 17:49:19.259 [INFO][4681] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5t9xs" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--5t9xs-eth0" Mar 17 17:49:19.286968 systemd[1]: Started cri-containerd-5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153.scope - libcontainer container 5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153. Mar 17 17:49:19.318112 systemd-networkd[1397]: cali21742482f7c: Link UP Mar 17 17:49:19.319571 systemd-networkd[1397]: cali21742482f7c: Gained carrier Mar 17 17:49:19.339320 containerd[1502]: time="2025-03-17T17:49:19.338840477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:19.339320 containerd[1502]: time="2025-03-17T17:49:19.339156723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:19.339320 containerd[1502]: time="2025-03-17T17:49:19.339175323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:19.339967 containerd[1502]: time="2025-03-17T17:49:19.339884175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:18.882 [INFO][4662] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:18.951 [INFO][4662] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--nd4n6-eth0 calico-apiserver-c5545ddd8- calico-apiserver a74fbbc9-5937-415b-8d68-ed4ea0db44e4 732 0 2025-03-17 17:49:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c5545ddd8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230-1-0-b-a06069b96b calico-apiserver-c5545ddd8-nd4n6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali21742482f7c [] []}} ContainerID="fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-nd4n6" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--nd4n6-" Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:18.952 [INFO][4662] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-nd4n6" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--nd4n6-eth0" Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.116 [INFO][4738] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" HandleID="k8s-pod-network.fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--nd4n6-eth0" Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.176 [INFO][4738] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" HandleID="k8s-pod-network.fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--nd4n6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002855b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230-1-0-b-a06069b96b", "pod":"calico-apiserver-c5545ddd8-nd4n6", "timestamp":"2025-03-17 17:49:19.116286764 +0000 UTC"}, Hostname:"ci-4230-1-0-b-a06069b96b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.176 [INFO][4738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.219 [INFO][4738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.219 [INFO][4738] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-1-0-b-a06069b96b' Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.224 [INFO][4738] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.235 [INFO][4738] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.259 [INFO][4738] ipam/ipam.go 489: Trying affinity for 192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.268 [INFO][4738] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.279 [INFO][4738] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.279 [INFO][4738] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.285 [INFO][4738] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910 Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.292 [INFO][4738] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.302 [INFO][4738] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.32.195/26] block=192.168.32.192/26 handle="k8s-pod-network.fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.302 [INFO][4738] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.195/26] handle="k8s-pod-network.fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.302 [INFO][4738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:49:19.352067 containerd[1502]: 2025-03-17 17:49:19.302 [INFO][4738] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.195/26] IPv6=[] ContainerID="fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" HandleID="k8s-pod-network.fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--nd4n6-eth0" Mar 17 17:49:19.353476 containerd[1502]: 2025-03-17 17:49:19.307 [INFO][4662] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-nd4n6" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--nd4n6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--nd4n6-eth0", GenerateName:"calico-apiserver-c5545ddd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a74fbbc9-5937-415b-8d68-ed4ea0db44e4", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5545ddd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-1-0-b-a06069b96b", ContainerID:"", Pod:"calico-apiserver-c5545ddd8-nd4n6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21742482f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:19.353476 containerd[1502]: 2025-03-17 17:49:19.307 [INFO][4662] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.195/32] ContainerID="fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-nd4n6" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--nd4n6-eth0" Mar 17 17:49:19.353476 containerd[1502]: 2025-03-17 17:49:19.307 [INFO][4662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21742482f7c ContainerID="fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-nd4n6" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--nd4n6-eth0" Mar 17 17:49:19.353476 containerd[1502]: 2025-03-17 17:49:19.319 [INFO][4662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-nd4n6" 
WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--nd4n6-eth0" Mar 17 17:49:19.353476 containerd[1502]: 2025-03-17 17:49:19.322 [INFO][4662] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-nd4n6" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--nd4n6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--nd4n6-eth0", GenerateName:"calico-apiserver-c5545ddd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a74fbbc9-5937-415b-8d68-ed4ea0db44e4", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5545ddd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-1-0-b-a06069b96b", ContainerID:"fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910", Pod:"calico-apiserver-c5545ddd8-nd4n6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21742482f7c", MAC:"fa:c2:bd:63:39:f8", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:19.353476 containerd[1502]: 2025-03-17 17:49:19.342 [INFO][4662] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-nd4n6" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--nd4n6-eth0" Mar 17 17:49:19.381586 systemd[1]: Started cri-containerd-95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17.scope - libcontainer container 95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17. Mar 17 17:49:19.400002 containerd[1502]: time="2025-03-17T17:49:19.399549454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:19.400002 containerd[1502]: time="2025-03-17T17:49:19.399638295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:19.400002 containerd[1502]: time="2025-03-17T17:49:19.399654816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:19.403391 containerd[1502]: time="2025-03-17T17:49:19.400943278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:19.420750 systemd-networkd[1397]: cali53a9763c548: Link UP Mar 17 17:49:19.430383 systemd-networkd[1397]: cali53a9763c548: Gained carrier Mar 17 17:49:19.460598 systemd[1]: Started cri-containerd-fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910.scope - libcontainer container fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910. 
Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:18.915 [INFO][4673] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:18.969 [INFO][4673] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--9n8th-eth0 coredns-7db6d8ff4d- kube-system 89e31e82-fcf0-4b12-9877-940dcbb04dfb 730 0 2025-03-17 17:48:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230-1-0-b-a06069b96b coredns-7db6d8ff4d-9n8th eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali53a9763c548 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n8th" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--9n8th-" Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:18.969 [INFO][4673] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n8th" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--9n8th-eth0" Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.117 [INFO][4741] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" HandleID="k8s-pod-network.984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" Workload="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--9n8th-eth0" Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.179 [INFO][4741] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" HandleID="k8s-pod-network.984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" Workload="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--9n8th-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000313d30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230-1-0-b-a06069b96b", "pod":"coredns-7db6d8ff4d-9n8th", "timestamp":"2025-03-17 17:49:19.117027497 +0000 UTC"}, Hostname:"ci-4230-1-0-b-a06069b96b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.179 [INFO][4741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.302 [INFO][4741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.303 [INFO][4741] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-1-0-b-a06069b96b' Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.308 [INFO][4741] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.316 [INFO][4741] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.331 [INFO][4741] ipam/ipam.go 489: Trying affinity for 192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.334 [INFO][4741] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.342 [INFO][4741] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.343 [INFO][4741] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.354 [INFO][4741] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8 Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.365 [INFO][4741] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.395 [INFO][4741] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.32.196/26] block=192.168.32.192/26 handle="k8s-pod-network.984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.395 [INFO][4741] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.196/26] handle="k8s-pod-network.984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.395 [INFO][4741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:49:19.465390 containerd[1502]: 2025-03-17 17:49:19.395 [INFO][4741] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.196/26] IPv6=[] ContainerID="984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" HandleID="k8s-pod-network.984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" Workload="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--9n8th-eth0" Mar 17 17:49:19.465954 containerd[1502]: 2025-03-17 17:49:19.400 [INFO][4673] cni-plugin/k8s.go 386: Populated endpoint ContainerID="984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n8th" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--9n8th-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--9n8th-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"89e31e82-fcf0-4b12-9877-940dcbb04dfb", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-1-0-b-a06069b96b", ContainerID:"", Pod:"coredns-7db6d8ff4d-9n8th", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53a9763c548", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:19.465954 containerd[1502]: 2025-03-17 17:49:19.400 [INFO][4673] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.196/32] ContainerID="984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n8th" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--9n8th-eth0" Mar 17 17:49:19.465954 containerd[1502]: 2025-03-17 17:49:19.400 [INFO][4673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53a9763c548 ContainerID="984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n8th" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--9n8th-eth0" Mar 17 17:49:19.465954 containerd[1502]: 2025-03-17 17:49:19.430 [INFO][4673] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n8th" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--9n8th-eth0" Mar 17 17:49:19.465954 containerd[1502]: 2025-03-17 17:49:19.432 [INFO][4673] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n8th" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--9n8th-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--9n8th-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"89e31e82-fcf0-4b12-9877-940dcbb04dfb", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-1-0-b-a06069b96b", ContainerID:"984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8", Pod:"coredns-7db6d8ff4d-9n8th", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53a9763c548", MAC:"52:31:cb:28:f2:a7", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:19.465954 containerd[1502]: 2025-03-17 17:49:19.458 [INFO][4673] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n8th" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-coredns--7db6d8ff4d--9n8th-eth0" Mar 17 17:49:19.521495 containerd[1502]: time="2025-03-17T17:49:19.519789787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:19.521495 containerd[1502]: time="2025-03-17T17:49:19.519845748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:19.521495 containerd[1502]: time="2025-03-17T17:49:19.519856748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:19.521495 containerd[1502]: time="2025-03-17T17:49:19.519929749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:19.550684 systemd-networkd[1397]: calic597867e949: Link UP Mar 17 17:49:19.554571 systemd-networkd[1397]: calic597867e949: Gained carrier Mar 17 17:49:19.569590 systemd[1]: Started cri-containerd-984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8.scope - libcontainer container 984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8. Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:18.930 [INFO][4702] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:18.976 [INFO][4702] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--bblw2-eth0 calico-apiserver-c5545ddd8- calico-apiserver 74b237a4-f5d5-48d3-8f38-13c8c4872091 731 0 2025-03-17 17:49:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c5545ddd8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230-1-0-b-a06069b96b calico-apiserver-c5545ddd8-bblw2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic597867e949 [] []}} ContainerID="bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-bblw2" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--bblw2-" Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:18.976 [INFO][4702] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-bblw2" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--bblw2-eth0" 
Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.137 [INFO][4749] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" HandleID="k8s-pod-network.bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--bblw2-eth0" Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.181 [INFO][4749] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" HandleID="k8s-pod-network.bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--bblw2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002eb7c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230-1-0-b-a06069b96b", "pod":"calico-apiserver-c5545ddd8-bblw2", "timestamp":"2025-03-17 17:49:19.137038565 +0000 UTC"}, Hostname:"ci-4230-1-0-b-a06069b96b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.182 [INFO][4749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.395 [INFO][4749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.395 [INFO][4749] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-1-0-b-a06069b96b' Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.409 [INFO][4749] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.454 [INFO][4749] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.479 [INFO][4749] ipam/ipam.go 489: Trying affinity for 192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.485 [INFO][4749] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.496 [INFO][4749] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.496 [INFO][4749] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.512 [INFO][4749] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038 Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.523 [INFO][4749] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.538 [INFO][4749] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.32.197/26] block=192.168.32.192/26 handle="k8s-pod-network.bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.538 [INFO][4749] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.197/26] handle="k8s-pod-network.bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.538 [INFO][4749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:49:19.587375 containerd[1502]: 2025-03-17 17:49:19.538 [INFO][4749] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.197/26] IPv6=[] ContainerID="bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" HandleID="k8s-pod-network.bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--bblw2-eth0" Mar 17 17:49:19.588049 containerd[1502]: 2025-03-17 17:49:19.545 [INFO][4702] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-bblw2" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--bblw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--bblw2-eth0", GenerateName:"calico-apiserver-c5545ddd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"74b237a4-f5d5-48d3-8f38-13c8c4872091", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5545ddd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-1-0-b-a06069b96b", ContainerID:"", Pod:"calico-apiserver-c5545ddd8-bblw2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic597867e949", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:19.588049 containerd[1502]: 2025-03-17 17:49:19.545 [INFO][4702] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.197/32] ContainerID="bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-bblw2" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--bblw2-eth0" Mar 17 17:49:19.588049 containerd[1502]: 2025-03-17 17:49:19.545 [INFO][4702] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic597867e949 ContainerID="bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-bblw2" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--bblw2-eth0" Mar 17 17:49:19.588049 containerd[1502]: 2025-03-17 17:49:19.556 [INFO][4702] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-bblw2" 
WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--bblw2-eth0" Mar 17 17:49:19.588049 containerd[1502]: 2025-03-17 17:49:19.563 [INFO][4702] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-bblw2" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--bblw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--bblw2-eth0", GenerateName:"calico-apiserver-c5545ddd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"74b237a4-f5d5-48d3-8f38-13c8c4872091", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5545ddd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-1-0-b-a06069b96b", ContainerID:"bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038", Pod:"calico-apiserver-c5545ddd8-bblw2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic597867e949", MAC:"0a:49:e8:88:a0:d4", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:19.588049 containerd[1502]: 2025-03-17 17:49:19.582 [INFO][4702] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038" Namespace="calico-apiserver" Pod="calico-apiserver-c5545ddd8-bblw2" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--apiserver--c5545ddd8--bblw2-eth0" Mar 17 17:49:19.594096 containerd[1502]: time="2025-03-17T17:49:19.593179104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5t9xs,Uid:18fbf695-ee31-4ad3-8e56-31fea597eadd,Namespace:kube-system,Attempt:5,} returns sandbox id \"95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17\"" Mar 17 17:49:19.620057 containerd[1502]: time="2025-03-17T17:49:19.618938152Z" level=info msg="CreateContainer within sandbox \"95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:49:19.654063 containerd[1502]: time="2025-03-17T17:49:19.653892401Z" level=info msg="CreateContainer within sandbox \"95289bcb9286bb760412d1ba15f02b844d5f0bf7a8c5f0d4c2bfbfe57bb7dd17\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef90b58178e7940e34a7b232837c33ec31e585ad0f6348261bda724757ce00fe\"" Mar 17 17:49:19.666119 containerd[1502]: time="2025-03-17T17:49:19.665984891Z" level=info msg="StartContainer for \"ef90b58178e7940e34a7b232837c33ec31e585ad0f6348261bda724757ce00fe\"" Mar 17 17:49:19.694258 systemd-networkd[1397]: calibb6089131e6: Link UP Mar 17 17:49:19.694567 systemd-networkd[1397]: calibb6089131e6: Gained carrier Mar 17 17:49:19.720693 containerd[1502]: time="2025-03-17T17:49:19.720336317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb9b7ff55-f6s4k,Uid:447aab82-3c54-4fc9-a563-99b96e52f28a,Namespace:calico-system,Attempt:5,} returns sandbox id 
\"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153\"" Mar 17 17:49:19.720827 containerd[1502]: time="2025-03-17T17:49:19.682236414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:19.720827 containerd[1502]: time="2025-03-17T17:49:19.682286575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:19.720827 containerd[1502]: time="2025-03-17T17:49:19.682297975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:19.720827 containerd[1502]: time="2025-03-17T17:49:19.682384377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:18.956 [INFO][4692] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.010 [INFO][4692] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--1--0--b--a06069b96b-k8s-csi--node--driver--ht6qb-eth0 csi-node-driver- calico-system 37c19387-6a1a-435e-b624-cd3e3f772523 612 0 2025-03-17 17:49:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4230-1-0-b-a06069b96b csi-node-driver-ht6qb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibb6089131e6 [] []}} ContainerID="03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" Namespace="calico-system" Pod="csi-node-driver-ht6qb" 
WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-csi--node--driver--ht6qb-" Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.011 [INFO][4692] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" Namespace="calico-system" Pod="csi-node-driver-ht6qb" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-csi--node--driver--ht6qb-eth0" Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.188 [INFO][4757] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" HandleID="k8s-pod-network.03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" Workload="ci--4230--1--0--b--a06069b96b-k8s-csi--node--driver--ht6qb-eth0" Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.227 [INFO][4757] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" HandleID="k8s-pod-network.03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" Workload="ci--4230--1--0--b--a06069b96b-k8s-csi--node--driver--ht6qb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230-1-0-b-a06069b96b", "pod":"csi-node-driver-ht6qb", "timestamp":"2025-03-17 17:49:19.188722744 +0000 UTC"}, Hostname:"ci-4230-1-0-b-a06069b96b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.227 [INFO][4757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.543 [INFO][4757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.543 [INFO][4757] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-1-0-b-a06069b96b' Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.573 [INFO][4757] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.592 [INFO][4757] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.619 [INFO][4757] ipam/ipam.go 489: Trying affinity for 192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.627 [INFO][4757] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.634 [INFO][4757] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.634 [INFO][4757] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.642 [INFO][4757] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6 Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.664 [INFO][4757] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.681 [INFO][4757] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.32.198/26] block=192.168.32.192/26 handle="k8s-pod-network.03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.681 [INFO][4757] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.198/26] handle="k8s-pod-network.03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.681 [INFO][4757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:49:19.722789 containerd[1502]: 2025-03-17 17:49:19.682 [INFO][4757] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.198/26] IPv6=[] ContainerID="03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" HandleID="k8s-pod-network.03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" Workload="ci--4230--1--0--b--a06069b96b-k8s-csi--node--driver--ht6qb-eth0" Mar 17 17:49:19.723958 containerd[1502]: 2025-03-17 17:49:19.689 [INFO][4692] cni-plugin/k8s.go 386: Populated endpoint ContainerID="03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" Namespace="calico-system" Pod="csi-node-driver-ht6qb" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-csi--node--driver--ht6qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--1--0--b--a06069b96b-k8s-csi--node--driver--ht6qb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37c19387-6a1a-435e-b624-cd3e3f772523", ResourceVersion:"612", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-1-0-b-a06069b96b", ContainerID:"", Pod:"csi-node-driver-ht6qb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibb6089131e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:19.723958 containerd[1502]: 2025-03-17 17:49:19.689 [INFO][4692] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.198/32] ContainerID="03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" Namespace="calico-system" Pod="csi-node-driver-ht6qb" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-csi--node--driver--ht6qb-eth0" Mar 17 17:49:19.723958 containerd[1502]: 2025-03-17 17:49:19.690 [INFO][4692] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb6089131e6 ContainerID="03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" Namespace="calico-system" Pod="csi-node-driver-ht6qb" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-csi--node--driver--ht6qb-eth0" Mar 17 17:49:19.723958 containerd[1502]: 2025-03-17 17:49:19.694 [INFO][4692] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" Namespace="calico-system" Pod="csi-node-driver-ht6qb" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-csi--node--driver--ht6qb-eth0" Mar 17 17:49:19.723958 containerd[1502]: 2025-03-17 17:49:19.695 
[INFO][4692] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" Namespace="calico-system" Pod="csi-node-driver-ht6qb" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-csi--node--driver--ht6qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--1--0--b--a06069b96b-k8s-csi--node--driver--ht6qb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37c19387-6a1a-435e-b624-cd3e3f772523", ResourceVersion:"612", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-1-0-b-a06069b96b", ContainerID:"03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6", Pod:"csi-node-driver-ht6qb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibb6089131e6", MAC:"b2:9e:f1:9a:9e:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:19.723958 containerd[1502]: 2025-03-17 17:49:19.706 [INFO][4692] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6" Namespace="calico-system" Pod="csi-node-driver-ht6qb" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-csi--node--driver--ht6qb-eth0" Mar 17 17:49:19.736171 containerd[1502]: time="2025-03-17T17:49:19.736001350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 17 17:49:19.742280 containerd[1502]: time="2025-03-17T17:49:19.742096816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n8th,Uid:89e31e82-fcf0-4b12-9877-940dcbb04dfb,Namespace:kube-system,Attempt:5,} returns sandbox id \"984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8\"" Mar 17 17:49:19.757422 containerd[1502]: time="2025-03-17T17:49:19.756832032Z" level=info msg="CreateContainer within sandbox \"984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:49:19.765563 systemd[1]: Started cri-containerd-ef90b58178e7940e34a7b232837c33ec31e585ad0f6348261bda724757ce00fe.scope - libcontainer container ef90b58178e7940e34a7b232837c33ec31e585ad0f6348261bda724757ce00fe. Mar 17 17:49:19.768987 kubelet[2846]: I0317 17:49:19.768939 2846 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:49:19.793266 containerd[1502]: time="2025-03-17T17:49:19.793112784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-nd4n6,Uid:a74fbbc9-5937-415b-8d68-ed4ea0db44e4,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910\"" Mar 17 17:49:19.795599 systemd[1]: Started cri-containerd-bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038.scope - libcontainer container bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038. 
Mar 17 17:49:19.806135 containerd[1502]: time="2025-03-17T17:49:19.805742804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:19.806135 containerd[1502]: time="2025-03-17T17:49:19.805804125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:19.809465 containerd[1502]: time="2025-03-17T17:49:19.809199904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:19.809465 containerd[1502]: time="2025-03-17T17:49:19.809329706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:19.830900 containerd[1502]: time="2025-03-17T17:49:19.828450279Z" level=info msg="StartContainer for \"ef90b58178e7940e34a7b232837c33ec31e585ad0f6348261bda724757ce00fe\" returns successfully" Mar 17 17:49:19.834831 containerd[1502]: time="2025-03-17T17:49:19.834765749Z" level=info msg="CreateContainer within sandbox \"984912e8ac05e0de945072acf6b483aad9d78db5fbb30bb9026af04881452be8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"75401f22f339f16847e40b190ebdc7c57b8c46a1916b0976b6ac763cdf04906f\"" Mar 17 17:49:19.837497 containerd[1502]: time="2025-03-17T17:49:19.835870808Z" level=info msg="StartContainer for \"75401f22f339f16847e40b190ebdc7c57b8c46a1916b0976b6ac763cdf04906f\"" Mar 17 17:49:19.853929 systemd[1]: Started cri-containerd-03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6.scope - libcontainer container 03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6. Mar 17 17:49:19.886067 systemd[1]: Started cri-containerd-75401f22f339f16847e40b190ebdc7c57b8c46a1916b0976b6ac763cdf04906f.scope - libcontainer container 75401f22f339f16847e40b190ebdc7c57b8c46a1916b0976b6ac763cdf04906f. 
Mar 17 17:49:19.889210 containerd[1502]: time="2025-03-17T17:49:19.888618206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5545ddd8-bblw2,Uid:74b237a4-f5d5-48d3-8f38-13c8c4872091,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038\"" Mar 17 17:49:19.921259 containerd[1502]: time="2025-03-17T17:49:19.919695747Z" level=info msg="StartContainer for \"75401f22f339f16847e40b190ebdc7c57b8c46a1916b0976b6ac763cdf04906f\" returns successfully" Mar 17 17:49:19.958547 containerd[1502]: time="2025-03-17T17:49:19.958502943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht6qb,Uid:37c19387-6a1a-435e-b624-cd3e3f772523,Namespace:calico-system,Attempt:6,} returns sandbox id \"03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6\"" Mar 17 17:49:20.677684 systemd-networkd[1397]: cali21742482f7c: Gained IPv6LL Mar 17 17:49:20.691469 kernel: bpftool[5285]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 17:49:20.806953 kubelet[2846]: I0317 17:49:20.806177 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5t9xs" podStartSLOduration=29.806161211 podStartE2EDuration="29.806161211s" podCreationTimestamp="2025-03-17 17:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:49:20.805756324 +0000 UTC m=+44.736032998" watchObservedRunningTime="2025-03-17 17:49:20.806161211 +0000 UTC m=+44.736437925" Mar 17 17:49:20.868543 systemd-networkd[1397]: cali53a9763c548: Gained IPv6LL Mar 17 17:49:20.868890 systemd-networkd[1397]: calic597867e949: Gained IPv6LL Mar 17 17:49:20.931878 systemd-networkd[1397]: vxlan.calico: Link UP Mar 17 17:49:20.931894 systemd-networkd[1397]: vxlan.calico: Gained carrier Mar 17 17:49:20.996679 systemd-networkd[1397]: calic1f3527c2f4: Gained IPv6LL 
Mar 17 17:49:21.126418 systemd-networkd[1397]: cali7e8f39e235e: Gained IPv6LL Mar 17 17:49:21.380884 systemd-networkd[1397]: calibb6089131e6: Gained IPv6LL Mar 17 17:49:22.489452 containerd[1502]: time="2025-03-17T17:49:22.489279291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:22.490796 containerd[1502]: time="2025-03-17T17:49:22.490734238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=32560257" Mar 17 17:49:22.492540 containerd[1502]: time="2025-03-17T17:49:22.491591534Z" level=info msg="ImageCreate event name:\"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:22.494489 containerd[1502]: time="2025-03-17T17:49:22.494455106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:22.495233 containerd[1502]: time="2025-03-17T17:49:22.495193320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"33929982\" in 2.758425797s" Mar 17 17:49:22.495311 containerd[1502]: time="2025-03-17T17:49:22.495241761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\"" Mar 17 17:49:22.496995 containerd[1502]: time="2025-03-17T17:49:22.496968193Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 17 17:49:22.525950 containerd[1502]: time="2025-03-17T17:49:22.525900245Z" level=info msg="CreateContainer within sandbox \"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 17 17:49:22.539035 containerd[1502]: time="2025-03-17T17:49:22.538994526Z" level=info msg="CreateContainer within sandbox \"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167\"" Mar 17 17:49:22.540190 containerd[1502]: time="2025-03-17T17:49:22.540160267Z" level=info msg="StartContainer for \"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167\"" Mar 17 17:49:22.579572 systemd[1]: Started cri-containerd-e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167.scope - libcontainer container e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167. 
Mar 17 17:49:22.619856 containerd[1502]: time="2025-03-17T17:49:22.619763732Z" level=info msg="StartContainer for \"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167\" returns successfully" Mar 17 17:49:22.829831 kubelet[2846]: I0317 17:49:22.829387 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9n8th" podStartSLOduration=31.829342708 podStartE2EDuration="31.829342708s" podCreationTimestamp="2025-03-17 17:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:49:20.847866031 +0000 UTC m=+44.778142745" watchObservedRunningTime="2025-03-17 17:49:22.829342708 +0000 UTC m=+46.759619462" Mar 17 17:49:22.852910 systemd-networkd[1397]: vxlan.calico: Gained IPv6LL Mar 17 17:49:23.902968 kubelet[2846]: I0317 17:49:23.901112 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fb9b7ff55-f6s4k" podStartSLOduration=18.140057264 podStartE2EDuration="20.901093387s" podCreationTimestamp="2025-03-17 17:49:03 +0000 UTC" firstStartedPulling="2025-03-17 17:49:19.735178056 +0000 UTC m=+43.665454770" lastFinishedPulling="2025-03-17 17:49:22.496214099 +0000 UTC m=+46.426490893" observedRunningTime="2025-03-17 17:49:22.830706533 +0000 UTC m=+46.760983287" watchObservedRunningTime="2025-03-17 17:49:23.901093387 +0000 UTC m=+47.831370101" Mar 17 17:49:24.517198 containerd[1502]: time="2025-03-17T17:49:24.517107388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:24.519000 containerd[1502]: time="2025-03-17T17:49:24.518785500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=40253267" Mar 17 17:49:24.520182 containerd[1502]: time="2025-03-17T17:49:24.520122245Z" level=info 
msg="ImageCreate event name:\"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:24.524397 containerd[1502]: time="2025-03-17T17:49:24.523247305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:24.525062 containerd[1502]: time="2025-03-17T17:49:24.524660732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 2.027560216s" Mar 17 17:49:24.525062 containerd[1502]: time="2025-03-17T17:49:24.524707292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 17 17:49:24.526669 containerd[1502]: time="2025-03-17T17:49:24.526644009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 17 17:49:24.528506 containerd[1502]: time="2025-03-17T17:49:24.528366842Z" level=info msg="CreateContainer within sandbox \"fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 17:49:24.549409 containerd[1502]: time="2025-03-17T17:49:24.549155477Z" level=info msg="CreateContainer within sandbox \"fc2723244c236b4991397ae4cd6155085a3d62147020b1ae0a1bf3d635570910\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f058d5234252583018418bf9542cb9c1bd77f74de6a69611fb0b44e4158bccfd\"" Mar 17 17:49:24.551419 containerd[1502]: time="2025-03-17T17:49:24.551318798Z" 
level=info msg="StartContainer for \"f058d5234252583018418bf9542cb9c1bd77f74de6a69611fb0b44e4158bccfd\"" Mar 17 17:49:24.593604 systemd[1]: Started cri-containerd-f058d5234252583018418bf9542cb9c1bd77f74de6a69611fb0b44e4158bccfd.scope - libcontainer container f058d5234252583018418bf9542cb9c1bd77f74de6a69611fb0b44e4158bccfd. Mar 17 17:49:24.633075 containerd[1502]: time="2025-03-17T17:49:24.632847068Z" level=info msg="StartContainer for \"f058d5234252583018418bf9542cb9c1bd77f74de6a69611fb0b44e4158bccfd\" returns successfully" Mar 17 17:49:24.839029 kubelet[2846]: I0317 17:49:24.838835 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c5545ddd8-nd4n6" podStartSLOduration=17.112506085 podStartE2EDuration="21.838812344s" podCreationTimestamp="2025-03-17 17:49:03 +0000 UTC" firstStartedPulling="2025-03-17 17:49:19.799619857 +0000 UTC m=+43.729896571" lastFinishedPulling="2025-03-17 17:49:24.525926116 +0000 UTC m=+48.456202830" observedRunningTime="2025-03-17 17:49:24.837084031 +0000 UTC m=+48.767360785" watchObservedRunningTime="2025-03-17 17:49:24.838812344 +0000 UTC m=+48.769089098" Mar 17 17:49:24.998312 containerd[1502]: time="2025-03-17T17:49:24.998242055Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:25.000894 containerd[1502]: time="2025-03-17T17:49:25.000844984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77" Mar 17 17:49:25.003030 containerd[1502]: time="2025-03-17T17:49:25.002983545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size 
\"41623040\" in 476.310255ms" Mar 17 17:49:25.003101 containerd[1502]: time="2025-03-17T17:49:25.003038707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 17 17:49:25.004511 containerd[1502]: time="2025-03-17T17:49:25.004474334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 17:49:25.010126 containerd[1502]: time="2025-03-17T17:49:25.010083923Z" level=info msg="CreateContainer within sandbox \"bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 17:49:25.031031 containerd[1502]: time="2025-03-17T17:49:25.030952605Z" level=info msg="CreateContainer within sandbox \"bce68a5ac969f74db69ac14cd2a186ef8b295da585c6cdad02f27a726b057038\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b300bcede748636907aa7c0e2dcbd410706d33310c7966f3907a3dead31280be\"" Mar 17 17:49:25.033401 containerd[1502]: time="2025-03-17T17:49:25.033341891Z" level=info msg="StartContainer for \"b300bcede748636907aa7c0e2dcbd410706d33310c7966f3907a3dead31280be\"" Mar 17 17:49:25.071765 systemd[1]: Started cri-containerd-b300bcede748636907aa7c0e2dcbd410706d33310c7966f3907a3dead31280be.scope - libcontainer container b300bcede748636907aa7c0e2dcbd410706d33310c7966f3907a3dead31280be. 
Mar 17 17:49:25.122877 containerd[1502]: time="2025-03-17T17:49:25.122761818Z" level=info msg="StartContainer for \"b300bcede748636907aa7c0e2dcbd410706d33310c7966f3907a3dead31280be\" returns successfully" Mar 17 17:49:25.850527 kubelet[2846]: I0317 17:49:25.850471 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c5545ddd8-bblw2" podStartSLOduration=17.737615429 podStartE2EDuration="22.850453224s" podCreationTimestamp="2025-03-17 17:49:03 +0000 UTC" firstStartedPulling="2025-03-17 17:49:19.891333733 +0000 UTC m=+43.821610447" lastFinishedPulling="2025-03-17 17:49:25.004171568 +0000 UTC m=+48.934448242" observedRunningTime="2025-03-17 17:49:25.850350102 +0000 UTC m=+49.780626856" watchObservedRunningTime="2025-03-17 17:49:25.850453224 +0000 UTC m=+49.780729938" Mar 17 17:49:26.837425 kubelet[2846]: I0317 17:49:26.836708 2846 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:49:26.837425 kubelet[2846]: I0317 17:49:26.836727 2846 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:49:27.823532 kubelet[2846]: I0317 17:49:27.823398 2846 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:49:28.081820 containerd[1502]: time="2025-03-17T17:49:28.080978384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:28.083754 containerd[1502]: time="2025-03-17T17:49:28.083614957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Mar 17 17:49:28.090398 containerd[1502]: time="2025-03-17T17:49:28.090095487Z" level=info msg="ImageCreate event name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:28.093767 containerd[1502]: time="2025-03-17T17:49:28.093701640Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:28.094456 containerd[1502]: time="2025-03-17T17:49:28.094418214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 3.089898399s" Mar 17 17:49:28.094456 containerd[1502]: time="2025-03-17T17:49:28.094453015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Mar 17 17:49:28.098637 containerd[1502]: time="2025-03-17T17:49:28.098562498Z" level=info msg="CreateContainer within sandbox \"03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 17:49:28.117193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1667582900.mount: Deactivated successfully. Mar 17 17:49:28.119520 containerd[1502]: time="2025-03-17T17:49:28.119475238Z" level=info msg="CreateContainer within sandbox \"03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6f7a3a1e7f152f11781058513b3653a501b586768cd762d4b5be20a6e4845712\"" Mar 17 17:49:28.120732 containerd[1502]: time="2025-03-17T17:49:28.120707223Z" level=info msg="StartContainer for \"6f7a3a1e7f152f11781058513b3653a501b586768cd762d4b5be20a6e4845712\"" Mar 17 17:49:28.154579 systemd[1]: Started cri-containerd-6f7a3a1e7f152f11781058513b3653a501b586768cd762d4b5be20a6e4845712.scope - libcontainer container 6f7a3a1e7f152f11781058513b3653a501b586768cd762d4b5be20a6e4845712. 
Mar 17 17:49:28.191056 containerd[1502]: time="2025-03-17T17:49:28.190978357Z" level=info msg="StartContainer for \"6f7a3a1e7f152f11781058513b3653a501b586768cd762d4b5be20a6e4845712\" returns successfully" Mar 17 17:49:28.193032 containerd[1502]: time="2025-03-17T17:49:28.192927357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 17 17:49:29.419328 containerd[1502]: time="2025-03-17T17:49:29.418999538Z" level=info msg="StopContainer for \"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600\" with timeout 300 (s)" Mar 17 17:49:29.423254 containerd[1502]: time="2025-03-17T17:49:29.420966218Z" level=info msg="Stop container \"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600\" with signal terminated" Mar 17 17:49:29.653515 containerd[1502]: time="2025-03-17T17:49:29.653472957Z" level=info msg="StopContainer for \"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167\" with timeout 30 (s)" Mar 17 17:49:29.656239 containerd[1502]: time="2025-03-17T17:49:29.654398736Z" level=info msg="Stop container \"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167\" with signal terminated" Mar 17 17:49:29.839409 systemd[1]: cri-containerd-e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167.scope: Deactivated successfully. 
Mar 17 17:49:29.919151 containerd[1502]: time="2025-03-17T17:49:29.914871804Z" level=info msg="shim disconnected" id=e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167 namespace=k8s.io Mar 17 17:49:29.919151 containerd[1502]: time="2025-03-17T17:49:29.914933006Z" level=warning msg="cleaning up after shim disconnected" id=e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167 namespace=k8s.io Mar 17 17:49:29.919151 containerd[1502]: time="2025-03-17T17:49:29.914941246Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:29.917102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167-rootfs.mount: Deactivated successfully. Mar 17 17:49:29.938095 containerd[1502]: time="2025-03-17T17:49:29.936462084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:29.939681 containerd[1502]: time="2025-03-17T17:49:29.939433225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717" Mar 17 17:49:29.941645 containerd[1502]: time="2025-03-17T17:49:29.941511507Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:29.945855 containerd[1502]: time="2025-03-17T17:49:29.945814315Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:49:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:49:29.949168 containerd[1502]: time="2025-03-17T17:49:29.949064901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:29.953921 containerd[1502]: time="2025-03-17T17:49:29.953863799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.760893802s" Mar 17 17:49:29.953921 containerd[1502]: time="2025-03-17T17:49:29.953913800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\"" Mar 17 17:49:29.959556 containerd[1502]: time="2025-03-17T17:49:29.959514234Z" level=info msg="CreateContainer within sandbox \"03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 17:49:30.036637 containerd[1502]: time="2025-03-17T17:49:30.035685395Z" level=info msg="StopContainer for \"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167\" returns successfully" Mar 17 17:49:30.042142 containerd[1502]: time="2025-03-17T17:49:30.037777558Z" level=info msg="StopPodSandbox for \"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153\"" Mar 17 17:49:30.042142 containerd[1502]: time="2025-03-17T17:49:30.037830280Z" level=info msg="Container to stop \"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:49:30.045580 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153-shm.mount: Deactivated successfully. 
Mar 17 17:49:30.054291 containerd[1502]: time="2025-03-17T17:49:30.053804009Z" level=info msg="StopContainer for \"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0\" with timeout 5 (s)" Mar 17 17:49:30.054980 containerd[1502]: time="2025-03-17T17:49:30.054723788Z" level=info msg="Stop container \"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0\" with signal terminated" Mar 17 17:49:30.060388 systemd[1]: cri-containerd-5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153.scope: Deactivated successfully. Mar 17 17:49:30.099173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153-rootfs.mount: Deactivated successfully. Mar 17 17:49:30.101266 containerd[1502]: time="2025-03-17T17:49:30.101194587Z" level=info msg="shim disconnected" id=5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153 namespace=k8s.io Mar 17 17:49:30.101514 containerd[1502]: time="2025-03-17T17:49:30.101343190Z" level=warning msg="cleaning up after shim disconnected" id=5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153 namespace=k8s.io Mar 17 17:49:30.101514 containerd[1502]: time="2025-03-17T17:49:30.101369870Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:30.110262 containerd[1502]: time="2025-03-17T17:49:30.110168652Z" level=info msg="CreateContainer within sandbox \"03d04cd3892626731d56b6bb5ee11090d6341cbdbc543e0c5fff1f1648b6d4c6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"18ba2f4c65fc072239d70bbddd598eca43f25a7f567a1ee9c4bad7784169df7e\"" Mar 17 17:49:30.113421 containerd[1502]: time="2025-03-17T17:49:30.112772666Z" level=info msg="StartContainer for \"18ba2f4c65fc072239d70bbddd598eca43f25a7f567a1ee9c4bad7784169df7e\"" Mar 17 17:49:30.120774 systemd[1]: cri-containerd-a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0.scope: Deactivated successfully. 
Mar 17 17:49:30.122427 systemd[1]: cri-containerd-a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0.scope: Consumed 1.421s CPU time, 153.2M memory peak, 660K written to disk. Mar 17 17:49:30.167380 containerd[1502]: time="2025-03-17T17:49:30.163988522Z" level=info msg="shim disconnected" id=a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0 namespace=k8s.io Mar 17 17:49:30.167380 containerd[1502]: time="2025-03-17T17:49:30.164039603Z" level=warning msg="cleaning up after shim disconnected" id=a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0 namespace=k8s.io Mar 17 17:49:30.167380 containerd[1502]: time="2025-03-17T17:49:30.164047643Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:30.169726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0-rootfs.mount: Deactivated successfully. Mar 17 17:49:30.206799 systemd[1]: Started cri-containerd-18ba2f4c65fc072239d70bbddd598eca43f25a7f567a1ee9c4bad7784169df7e.scope - libcontainer container 18ba2f4c65fc072239d70bbddd598eca43f25a7f567a1ee9c4bad7784169df7e. 
Mar 17 17:49:30.249271 containerd[1502]: time="2025-03-17T17:49:30.249125359Z" level=info msg="StopContainer for \"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0\" returns successfully" Mar 17 17:49:30.251898 containerd[1502]: time="2025-03-17T17:49:30.251856175Z" level=info msg="StopPodSandbox for \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\"" Mar 17 17:49:30.252030 containerd[1502]: time="2025-03-17T17:49:30.251910416Z" level=info msg="Container to stop \"2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:49:30.252030 containerd[1502]: time="2025-03-17T17:49:30.251923816Z" level=info msg="Container to stop \"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:49:30.252030 containerd[1502]: time="2025-03-17T17:49:30.251933817Z" level=info msg="Container to stop \"05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:49:30.270554 systemd-networkd[1397]: calic1f3527c2f4: Link DOWN Mar 17 17:49:30.270560 systemd-networkd[1397]: calic1f3527c2f4: Lost carrier Mar 17 17:49:30.271583 systemd[1]: cri-containerd-8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f.scope: Deactivated successfully. 
Mar 17 17:49:30.315455 containerd[1502]: time="2025-03-17T17:49:30.315087479Z" level=info msg="StartContainer for \"18ba2f4c65fc072239d70bbddd598eca43f25a7f567a1ee9c4bad7784169df7e\" returns successfully" Mar 17 17:49:30.333770 containerd[1502]: time="2025-03-17T17:49:30.333562701Z" level=info msg="shim disconnected" id=8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f namespace=k8s.io Mar 17 17:49:30.333770 containerd[1502]: time="2025-03-17T17:49:30.333620582Z" level=warning msg="cleaning up after shim disconnected" id=8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f namespace=k8s.io Mar 17 17:49:30.333770 containerd[1502]: time="2025-03-17T17:49:30.333628382Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:30.368054 containerd[1502]: time="2025-03-17T17:49:30.367582202Z" level=info msg="TearDown network for sandbox \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\" successfully" Mar 17 17:49:30.368054 containerd[1502]: time="2025-03-17T17:49:30.367619323Z" level=info msg="StopPodSandbox for \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\" returns successfully" Mar 17 17:49:30.434713 containerd[1502]: 2025-03-17 17:49:30.262 [INFO][5759] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Mar 17 17:49:30.434713 containerd[1502]: 2025-03-17 17:49:30.262 [INFO][5759] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" iface="eth0" netns="/var/run/netns/cni-d924150a-6d27-f000-5480-5d37e655ba5c" Mar 17 17:49:30.434713 containerd[1502]: 2025-03-17 17:49:30.264 [INFO][5759] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" iface="eth0" netns="/var/run/netns/cni-d924150a-6d27-f000-5480-5d37e655ba5c" Mar 17 17:49:30.434713 containerd[1502]: 2025-03-17 17:49:30.279 [INFO][5759] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" after=15.624322ms iface="eth0" netns="/var/run/netns/cni-d924150a-6d27-f000-5480-5d37e655ba5c" Mar 17 17:49:30.434713 containerd[1502]: 2025-03-17 17:49:30.279 [INFO][5759] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Mar 17 17:49:30.434713 containerd[1502]: 2025-03-17 17:49:30.279 [INFO][5759] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Mar 17 17:49:30.434713 containerd[1502]: 2025-03-17 17:49:30.342 [INFO][5788] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" HandleID="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:30.434713 containerd[1502]: 2025-03-17 17:49:30.343 [INFO][5788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:49:30.434713 containerd[1502]: 2025-03-17 17:49:30.343 [INFO][5788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:49:30.434713 containerd[1502]: 2025-03-17 17:49:30.413 [INFO][5788] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" HandleID="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:30.434713 containerd[1502]: 2025-03-17 17:49:30.413 [INFO][5788] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" HandleID="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:30.434713 containerd[1502]: 2025-03-17 17:49:30.427 [INFO][5788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:49:30.434713 containerd[1502]: 2025-03-17 17:49:30.432 [INFO][5759] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Mar 17 17:49:30.437477 kubelet[2846]: I0317 17:49:30.436635 2846 topology_manager.go:215] "Topology Admit Handler" podUID="b21e8cd6-62f8-47cc-a955-d636bd4adff2" podNamespace="calico-system" podName="calico-node-9qjpj" Mar 17 17:49:30.437477 kubelet[2846]: E0317 17:49:30.436770 2846 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c486f367-e8f1-4495-8302-4a805d81fa28" containerName="flexvol-driver" Mar 17 17:49:30.437477 kubelet[2846]: E0317 17:49:30.436780 2846 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c486f367-e8f1-4495-8302-4a805d81fa28" containerName="install-cni" Mar 17 17:49:30.437477 kubelet[2846]: E0317 17:49:30.436786 2846 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c486f367-e8f1-4495-8302-4a805d81fa28" containerName="calico-node" Mar 17 17:49:30.437477 kubelet[2846]: I0317 17:49:30.436816 2846 memory_manager.go:354] "RemoveStaleState removing state" podUID="c486f367-e8f1-4495-8302-4a805d81fa28" containerName="calico-node" Mar 17 17:49:30.439283 containerd[1502]: time="2025-03-17T17:49:30.438002375Z" level=info msg="TearDown network for sandbox \"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153\" successfully" Mar 17 17:49:30.439283 containerd[1502]: time="2025-03-17T17:49:30.438037336Z" level=info msg="StopPodSandbox for \"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153\" returns successfully" Mar 17 17:49:30.440882 containerd[1502]: time="2025-03-17T17:49:30.440378984Z" level=info msg="StopPodSandbox for \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\"" Mar 17 17:49:30.440882 containerd[1502]: time="2025-03-17T17:49:30.440504787Z" level=info msg="TearDown network for sandbox \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\" successfully" Mar 17 17:49:30.440882 containerd[1502]: time="2025-03-17T17:49:30.440515027Z" level=info 
msg="StopPodSandbox for \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\" returns successfully" Mar 17 17:49:30.442602 containerd[1502]: time="2025-03-17T17:49:30.442322544Z" level=info msg="StopPodSandbox for \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\"" Mar 17 17:49:30.442602 containerd[1502]: time="2025-03-17T17:49:30.442456667Z" level=info msg="TearDown network for sandbox \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\" successfully" Mar 17 17:49:30.442602 containerd[1502]: time="2025-03-17T17:49:30.442467707Z" level=info msg="StopPodSandbox for \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\" returns successfully" Mar 17 17:49:30.444064 containerd[1502]: time="2025-03-17T17:49:30.443912257Z" level=info msg="StopPodSandbox for \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\"" Mar 17 17:49:30.444064 containerd[1502]: time="2025-03-17T17:49:30.444000019Z" level=info msg="TearDown network for sandbox \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\" successfully" Mar 17 17:49:30.444064 containerd[1502]: time="2025-03-17T17:49:30.444010099Z" level=info msg="StopPodSandbox for \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\" returns successfully" Mar 17 17:49:30.446580 containerd[1502]: time="2025-03-17T17:49:30.445855737Z" level=info msg="StopPodSandbox for \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\"" Mar 17 17:49:30.446580 containerd[1502]: time="2025-03-17T17:49:30.445943539Z" level=info msg="TearDown network for sandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\" successfully" Mar 17 17:49:30.446580 containerd[1502]: time="2025-03-17T17:49:30.445953419Z" level=info msg="StopPodSandbox for \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\" returns successfully" Mar 17 17:49:30.447407 containerd[1502]: time="2025-03-17T17:49:30.447152284Z" level=info 
msg="StopPodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\"" Mar 17 17:49:30.447407 containerd[1502]: time="2025-03-17T17:49:30.447278447Z" level=info msg="TearDown network for sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" successfully" Mar 17 17:49:30.447407 containerd[1502]: time="2025-03-17T17:49:30.447291247Z" level=info msg="StopPodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" returns successfully" Mar 17 17:49:30.454099 systemd[1]: Created slice kubepods-besteffort-podb21e8cd6_62f8_47cc_a955_d636bd4adff2.slice - libcontainer container kubepods-besteffort-podb21e8cd6_62f8_47cc_a955_d636bd4adff2.slice. Mar 17 17:49:30.497262 kubelet[2846]: I0317 17:49:30.496438 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86hvn\" (UniqueName: \"kubernetes.io/projected/c486f367-e8f1-4495-8302-4a805d81fa28-kube-api-access-86hvn\") pod \"c486f367-e8f1-4495-8302-4a805d81fa28\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " Mar 17 17:49:30.497262 kubelet[2846]: I0317 17:49:30.496481 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c486f367-e8f1-4495-8302-4a805d81fa28-tigera-ca-bundle\") pod \"c486f367-e8f1-4495-8302-4a805d81fa28\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " Mar 17 17:49:30.497564 kubelet[2846]: I0317 17:49:30.497529 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-flexvol-driver-host\") pod \"c486f367-e8f1-4495-8302-4a805d81fa28\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " Mar 17 17:49:30.497612 kubelet[2846]: I0317 17:49:30.497576 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-cni-bin-dir\") pod \"c486f367-e8f1-4495-8302-4a805d81fa28\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " Mar 17 17:49:30.497612 kubelet[2846]: I0317 17:49:30.497593 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-xtables-lock\") pod \"c486f367-e8f1-4495-8302-4a805d81fa28\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " Mar 17 17:49:30.497612 kubelet[2846]: I0317 17:49:30.497608 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-cni-log-dir\") pod \"c486f367-e8f1-4495-8302-4a805d81fa28\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " Mar 17 17:49:30.497701 kubelet[2846]: I0317 17:49:30.497628 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-policysync\") pod \"c486f367-e8f1-4495-8302-4a805d81fa28\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " Mar 17 17:49:30.497701 kubelet[2846]: I0317 17:49:30.497666 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-var-run-calico\") pod \"c486f367-e8f1-4495-8302-4a805d81fa28\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " Mar 17 17:49:30.497804 kubelet[2846]: I0317 17:49:30.497777 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c486f367-e8f1-4495-8302-4a805d81fa28-node-certs\") pod \"c486f367-e8f1-4495-8302-4a805d81fa28\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " Mar 17 17:49:30.497846 kubelet[2846]: I0317 17:49:30.497805 2846 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-lib-modules\") pod \"c486f367-e8f1-4495-8302-4a805d81fa28\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " Mar 17 17:49:30.497846 kubelet[2846]: I0317 17:49:30.497819 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-cni-net-dir\") pod \"c486f367-e8f1-4495-8302-4a805d81fa28\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " Mar 17 17:49:30.497846 kubelet[2846]: I0317 17:49:30.497840 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-var-lib-calico\") pod \"c486f367-e8f1-4495-8302-4a805d81fa28\" (UID: \"c486f367-e8f1-4495-8302-4a805d81fa28\") " Mar 17 17:49:30.498078 kubelet[2846]: I0317 17:49:30.497903 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b21e8cd6-62f8-47cc-a955-d636bd4adff2-flexvol-driver-host\") pod \"calico-node-9qjpj\" (UID: \"b21e8cd6-62f8-47cc-a955-d636bd4adff2\") " pod="calico-system/calico-node-9qjpj" Mar 17 17:49:30.498078 kubelet[2846]: I0317 17:49:30.497930 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b21e8cd6-62f8-47cc-a955-d636bd4adff2-policysync\") pod \"calico-node-9qjpj\" (UID: \"b21e8cd6-62f8-47cc-a955-d636bd4adff2\") " pod="calico-system/calico-node-9qjpj" Mar 17 17:49:30.498078 kubelet[2846]: I0317 17:49:30.497951 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z8gb\" (UniqueName: 
\"kubernetes.io/projected/b21e8cd6-62f8-47cc-a955-d636bd4adff2-kube-api-access-6z8gb\") pod \"calico-node-9qjpj\" (UID: \"b21e8cd6-62f8-47cc-a955-d636bd4adff2\") " pod="calico-system/calico-node-9qjpj" Mar 17 17:49:30.498078 kubelet[2846]: I0317 17:49:30.497968 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b21e8cd6-62f8-47cc-a955-d636bd4adff2-xtables-lock\") pod \"calico-node-9qjpj\" (UID: \"b21e8cd6-62f8-47cc-a955-d636bd4adff2\") " pod="calico-system/calico-node-9qjpj" Mar 17 17:49:30.498078 kubelet[2846]: I0317 17:49:30.497988 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b21e8cd6-62f8-47cc-a955-d636bd4adff2-var-lib-calico\") pod \"calico-node-9qjpj\" (UID: \"b21e8cd6-62f8-47cc-a955-d636bd4adff2\") " pod="calico-system/calico-node-9qjpj" Mar 17 17:49:30.498215 kubelet[2846]: I0317 17:49:30.498028 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b21e8cd6-62f8-47cc-a955-d636bd4adff2-cni-bin-dir\") pod \"calico-node-9qjpj\" (UID: \"b21e8cd6-62f8-47cc-a955-d636bd4adff2\") " pod="calico-system/calico-node-9qjpj" Mar 17 17:49:30.498215 kubelet[2846]: I0317 17:49:30.498046 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b21e8cd6-62f8-47cc-a955-d636bd4adff2-tigera-ca-bundle\") pod \"calico-node-9qjpj\" (UID: \"b21e8cd6-62f8-47cc-a955-d636bd4adff2\") " pod="calico-system/calico-node-9qjpj" Mar 17 17:49:30.498215 kubelet[2846]: I0317 17:49:30.498063 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/b21e8cd6-62f8-47cc-a955-d636bd4adff2-cni-net-dir\") pod \"calico-node-9qjpj\" (UID: \"b21e8cd6-62f8-47cc-a955-d636bd4adff2\") " pod="calico-system/calico-node-9qjpj" Mar 17 17:49:30.498215 kubelet[2846]: I0317 17:49:30.498087 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b21e8cd6-62f8-47cc-a955-d636bd4adff2-cni-log-dir\") pod \"calico-node-9qjpj\" (UID: \"b21e8cd6-62f8-47cc-a955-d636bd4adff2\") " pod="calico-system/calico-node-9qjpj" Mar 17 17:49:30.498215 kubelet[2846]: I0317 17:49:30.498104 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b21e8cd6-62f8-47cc-a955-d636bd4adff2-node-certs\") pod \"calico-node-9qjpj\" (UID: \"b21e8cd6-62f8-47cc-a955-d636bd4adff2\") " pod="calico-system/calico-node-9qjpj" Mar 17 17:49:30.499555 kubelet[2846]: I0317 17:49:30.498128 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b21e8cd6-62f8-47cc-a955-d636bd4adff2-var-run-calico\") pod \"calico-node-9qjpj\" (UID: \"b21e8cd6-62f8-47cc-a955-d636bd4adff2\") " pod="calico-system/calico-node-9qjpj" Mar 17 17:49:30.499555 kubelet[2846]: I0317 17:49:30.498152 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b21e8cd6-62f8-47cc-a955-d636bd4adff2-lib-modules\") pod \"calico-node-9qjpj\" (UID: \"b21e8cd6-62f8-47cc-a955-d636bd4adff2\") " pod="calico-system/calico-node-9qjpj" Mar 17 17:49:30.521708 kubelet[2846]: I0317 17:49:30.520420 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod 
"c486f367-e8f1-4495-8302-4a805d81fa28" (UID: "c486f367-e8f1-4495-8302-4a805d81fa28"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:49:30.521708 kubelet[2846]: I0317 17:49:30.520505 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "c486f367-e8f1-4495-8302-4a805d81fa28" (UID: "c486f367-e8f1-4495-8302-4a805d81fa28"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:49:30.521708 kubelet[2846]: I0317 17:49:30.520556 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c486f367-e8f1-4495-8302-4a805d81fa28" (UID: "c486f367-e8f1-4495-8302-4a805d81fa28"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:49:30.521708 kubelet[2846]: I0317 17:49:30.520583 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "c486f367-e8f1-4495-8302-4a805d81fa28" (UID: "c486f367-e8f1-4495-8302-4a805d81fa28"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:49:30.521708 kubelet[2846]: I0317 17:49:30.520614 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-policysync" (OuterVolumeSpecName: "policysync") pod "c486f367-e8f1-4495-8302-4a805d81fa28" (UID: "c486f367-e8f1-4495-8302-4a805d81fa28"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:49:30.532749 kubelet[2846]: I0317 17:49:30.531018 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "c486f367-e8f1-4495-8302-4a805d81fa28" (UID: "c486f367-e8f1-4495-8302-4a805d81fa28"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:49:30.533631 kubelet[2846]: I0317 17:49:30.533554 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c486f367-e8f1-4495-8302-4a805d81fa28" (UID: "c486f367-e8f1-4495-8302-4a805d81fa28"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:49:30.533716 kubelet[2846]: I0317 17:49:30.533662 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "c486f367-e8f1-4495-8302-4a805d81fa28" (UID: "c486f367-e8f1-4495-8302-4a805d81fa28"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:49:30.533716 kubelet[2846]: I0317 17:49:30.533685 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "c486f367-e8f1-4495-8302-4a805d81fa28" (UID: "c486f367-e8f1-4495-8302-4a805d81fa28"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:49:30.536997 kubelet[2846]: I0317 17:49:30.536946 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c486f367-e8f1-4495-8302-4a805d81fa28-kube-api-access-86hvn" (OuterVolumeSpecName: "kube-api-access-86hvn") pod "c486f367-e8f1-4495-8302-4a805d81fa28" (UID: "c486f367-e8f1-4495-8302-4a805d81fa28"). InnerVolumeSpecName "kube-api-access-86hvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:49:30.537119 kubelet[2846]: I0317 17:49:30.537055 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c486f367-e8f1-4495-8302-4a805d81fa28-node-certs" (OuterVolumeSpecName: "node-certs") pod "c486f367-e8f1-4495-8302-4a805d81fa28" (UID: "c486f367-e8f1-4495-8302-4a805d81fa28"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:49:30.539578 kubelet[2846]: I0317 17:49:30.539526 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c486f367-e8f1-4495-8302-4a805d81fa28-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "c486f367-e8f1-4495-8302-4a805d81fa28" (UID: "c486f367-e8f1-4495-8302-4a805d81fa28"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:49:30.599702 kubelet[2846]: I0317 17:49:30.599346 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/447aab82-3c54-4fc9-a563-99b96e52f28a-tigera-ca-bundle\") pod \"447aab82-3c54-4fc9-a563-99b96e52f28a\" (UID: \"447aab82-3c54-4fc9-a563-99b96e52f28a\") " Mar 17 17:49:30.599702 kubelet[2846]: I0317 17:49:30.599458 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w59j\" (UniqueName: \"kubernetes.io/projected/447aab82-3c54-4fc9-a563-99b96e52f28a-kube-api-access-5w59j\") pod \"447aab82-3c54-4fc9-a563-99b96e52f28a\" (UID: \"447aab82-3c54-4fc9-a563-99b96e52f28a\") " Mar 17 17:49:30.600117 kubelet[2846]: I0317 17:49:30.599921 2846 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-var-run-calico\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:30.600117 kubelet[2846]: I0317 17:49:30.599945 2846 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c486f367-e8f1-4495-8302-4a805d81fa28-node-certs\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:30.600117 kubelet[2846]: I0317 17:49:30.599955 2846 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-var-lib-calico\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:30.600117 kubelet[2846]: I0317 17:49:30.599972 2846 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-lib-modules\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:30.600117 kubelet[2846]: I0317 17:49:30.600085 2846 reconciler_common.go:289] 
"Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-cni-net-dir\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:30.600117 kubelet[2846]: I0317 17:49:30.600096 2846 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-86hvn\" (UniqueName: \"kubernetes.io/projected/c486f367-e8f1-4495-8302-4a805d81fa28-kube-api-access-86hvn\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:30.600617 kubelet[2846]: I0317 17:49:30.600107 2846 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-xtables-lock\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:30.600617 kubelet[2846]: I0317 17:49:30.600320 2846 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-cni-log-dir\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:30.600617 kubelet[2846]: I0317 17:49:30.600331 2846 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c486f367-e8f1-4495-8302-4a805d81fa28-tigera-ca-bundle\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:30.600617 kubelet[2846]: I0317 17:49:30.600339 2846 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-flexvol-driver-host\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:30.600617 kubelet[2846]: I0317 17:49:30.600347 2846 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-cni-bin-dir\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:30.600773 kubelet[2846]: I0317 17:49:30.600638 2846 
reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c486f367-e8f1-4495-8302-4a805d81fa28-policysync\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:30.612329 kubelet[2846]: I0317 17:49:30.612257 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/447aab82-3c54-4fc9-a563-99b96e52f28a-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "447aab82-3c54-4fc9-a563-99b96e52f28a" (UID: "447aab82-3c54-4fc9-a563-99b96e52f28a"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:49:30.612516 kubelet[2846]: I0317 17:49:30.612401 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/447aab82-3c54-4fc9-a563-99b96e52f28a-kube-api-access-5w59j" (OuterVolumeSpecName: "kube-api-access-5w59j") pod "447aab82-3c54-4fc9-a563-99b96e52f28a" (UID: "447aab82-3c54-4fc9-a563-99b96e52f28a"). InnerVolumeSpecName "kube-api-access-5w59j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:49:30.701840 kubelet[2846]: I0317 17:49:30.701692 2846 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/447aab82-3c54-4fc9-a563-99b96e52f28a-tigera-ca-bundle\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:30.701840 kubelet[2846]: I0317 17:49:30.701734 2846 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5w59j\" (UniqueName: \"kubernetes.io/projected/447aab82-3c54-4fc9-a563-99b96e52f28a-kube-api-access-5w59j\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:30.762021 containerd[1502]: time="2025-03-17T17:49:30.761625412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9qjpj,Uid:b21e8cd6-62f8-47cc-a955-d636bd4adff2,Namespace:calico-system,Attempt:0,}" Mar 17 17:49:30.795595 containerd[1502]: time="2025-03-17T17:49:30.795477070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:30.795841 containerd[1502]: time="2025-03-17T17:49:30.795765676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:30.796064 containerd[1502]: time="2025-03-17T17:49:30.796011921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:30.796539 containerd[1502]: time="2025-03-17T17:49:30.796284687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:30.817605 systemd[1]: Started cri-containerd-d6ed52febf6957f59b2eb24e8fdabd3477842eee869ca2738a4956e15376c914.scope - libcontainer container d6ed52febf6957f59b2eb24e8fdabd3477842eee869ca2738a4956e15376c914. 
Mar 17 17:49:30.858345 containerd[1502]: time="2025-03-17T17:49:30.858185124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9qjpj,Uid:b21e8cd6-62f8-47cc-a955-d636bd4adff2,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6ed52febf6957f59b2eb24e8fdabd3477842eee869ca2738a4956e15376c914\"" Mar 17 17:49:30.872641 containerd[1502]: time="2025-03-17T17:49:30.872473058Z" level=info msg="CreateContainer within sandbox \"d6ed52febf6957f59b2eb24e8fdabd3477842eee869ca2738a4956e15376c914\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:49:30.888802 kubelet[2846]: I0317 17:49:30.888643 2846 scope.go:117] "RemoveContainer" containerID="e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167" Mar 17 17:49:30.894395 containerd[1502]: time="2025-03-17T17:49:30.893408170Z" level=info msg="RemoveContainer for \"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167\"" Mar 17 17:49:30.900413 systemd[1]: Removed slice kubepods-besteffort-pod447aab82_3c54_4fc9_a563_99b96e52f28a.slice - libcontainer container kubepods-besteffort-pod447aab82_3c54_4fc9_a563_99b96e52f28a.slice. 
Mar 17 17:49:30.905401 containerd[1502]: time="2025-03-17T17:49:30.902864005Z" level=info msg="RemoveContainer for \"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167\" returns successfully" Mar 17 17:49:30.906380 kubelet[2846]: I0317 17:49:30.905506 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ht6qb" podStartSLOduration=17.911130574 podStartE2EDuration="27.905487539s" podCreationTimestamp="2025-03-17 17:49:03 +0000 UTC" firstStartedPulling="2025-03-17 17:49:19.961469794 +0000 UTC m=+43.891746548" lastFinishedPulling="2025-03-17 17:49:29.955826799 +0000 UTC m=+53.886103513" observedRunningTime="2025-03-17 17:49:30.900392594 +0000 UTC m=+54.830669268" watchObservedRunningTime="2025-03-17 17:49:30.905487539 +0000 UTC m=+54.835764253" Mar 17 17:49:30.912972 kubelet[2846]: I0317 17:49:30.911190 2846 scope.go:117] "RemoveContainer" containerID="e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167" Mar 17 17:49:30.913123 containerd[1502]: time="2025-03-17T17:49:30.912885452Z" level=error msg="ContainerStatus for \"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167\": not found" Mar 17 17:49:30.913163 kubelet[2846]: E0317 17:49:30.913070 2846 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167\": not found" containerID="e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167" Mar 17 17:49:30.913198 kubelet[2846]: I0317 17:49:30.913110 2846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167"} err="failed to get container 
status \"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1c2a7e4ddbf63bcc8eb40a35f879ddb9d05e24294a2c37882dbb428532fa167\": not found" Mar 17 17:49:30.929378 kubelet[2846]: I0317 17:49:30.926529 2846 scope.go:117] "RemoveContainer" containerID="a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0" Mar 17 17:49:30.934486 systemd[1]: var-lib-kubelet-pods-447aab82\x2d3c54\x2d4fc9\x2da563\x2d99b96e52f28a-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Mar 17 17:49:30.934598 systemd[1]: run-netns-cni\x2dd924150a\x2d6d27\x2df000\x2d5480\x2d5d37e655ba5c.mount: Deactivated successfully. Mar 17 17:49:30.934654 systemd[1]: var-lib-kubelet-pods-c486f367\x2de8f1\x2d4495\x2d8302\x2d4a805d81fa28-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Mar 17 17:49:30.934716 systemd[1]: var-lib-kubelet-pods-447aab82\x2d3c54\x2d4fc9\x2da563\x2d99b96e52f28a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5w59j.mount: Deactivated successfully. Mar 17 17:49:30.934781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f-rootfs.mount: Deactivated successfully. Mar 17 17:49:30.934834 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f-shm.mount: Deactivated successfully. Mar 17 17:49:30.934896 systemd[1]: var-lib-kubelet-pods-c486f367\x2de8f1\x2d4495\x2d8302\x2d4a805d81fa28-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d86hvn.mount: Deactivated successfully. Mar 17 17:49:30.934947 systemd[1]: var-lib-kubelet-pods-c486f367\x2de8f1\x2d4495\x2d8302\x2d4a805d81fa28-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Mar 17 17:49:30.943510 containerd[1502]: time="2025-03-17T17:49:30.938631823Z" level=info msg="RemoveContainer for \"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0\"" Mar 17 17:49:30.948584 systemd[1]: cri-containerd-f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600.scope: Deactivated successfully. Mar 17 17:49:30.952827 systemd[1]: Removed slice kubepods-besteffort-podc486f367_e8f1_4495_8302_4a805d81fa28.slice - libcontainer container kubepods-besteffort-podc486f367_e8f1_4495_8302_4a805d81fa28.slice. Mar 17 17:49:30.952917 systemd[1]: kubepods-besteffort-podc486f367_e8f1_4495_8302_4a805d81fa28.slice: Consumed 1.948s CPU time, 283.9M memory peak, 157.1M written to disk. Mar 17 17:49:31.022845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600-rootfs.mount: Deactivated successfully. Mar 17 17:49:31.026535 containerd[1502]: time="2025-03-17T17:49:31.026471441Z" level=info msg="shim disconnected" id=f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600 namespace=k8s.io Mar 17 17:49:31.026671 containerd[1502]: time="2025-03-17T17:49:31.026644205Z" level=warning msg="cleaning up after shim disconnected" id=f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600 namespace=k8s.io Mar 17 17:49:31.026784 containerd[1502]: time="2025-03-17T17:49:31.026738607Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:31.028938 containerd[1502]: time="2025-03-17T17:49:31.026670686Z" level=info msg="RemoveContainer for \"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0\" returns successfully" Mar 17 17:49:31.031761 kubelet[2846]: I0317 17:49:31.031726 2846 scope.go:117] "RemoveContainer" containerID="2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374" Mar 17 17:49:31.035522 containerd[1502]: time="2025-03-17T17:49:31.035485550Z" level=info msg="RemoveContainer for 
\"2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374\"" Mar 17 17:49:31.046476 containerd[1502]: time="2025-03-17T17:49:31.046316576Z" level=info msg="RemoveContainer for \"2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374\" returns successfully" Mar 17 17:49:31.047667 kubelet[2846]: I0317 17:49:31.047628 2846 scope.go:117] "RemoveContainer" containerID="05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222" Mar 17 17:49:31.053485 containerd[1502]: time="2025-03-17T17:49:31.053443404Z" level=info msg="RemoveContainer for \"05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222\"" Mar 17 17:49:31.058009 containerd[1502]: time="2025-03-17T17:49:31.057388047Z" level=info msg="CreateContainer within sandbox \"d6ed52febf6957f59b2eb24e8fdabd3477842eee869ca2738a4956e15376c914\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7f8c101f7f6007b61e9f288423d9cd7abcabd9aa48da163dd21b28e53d3f0877\"" Mar 17 17:49:31.060728 containerd[1502]: time="2025-03-17T17:49:31.060675835Z" level=info msg="StartContainer for \"7f8c101f7f6007b61e9f288423d9cd7abcabd9aa48da163dd21b28e53d3f0877\"" Mar 17 17:49:31.066126 containerd[1502]: time="2025-03-17T17:49:31.066030907Z" level=info msg="RemoveContainer for \"05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222\" returns successfully" Mar 17 17:49:31.067106 kubelet[2846]: I0317 17:49:31.067078 2846 scope.go:117] "RemoveContainer" containerID="a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0" Mar 17 17:49:31.067757 containerd[1502]: time="2025-03-17T17:49:31.067545299Z" level=error msg="ContainerStatus for \"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0\": not found" Mar 17 17:49:31.068127 kubelet[2846]: E0317 17:49:31.068097 2846 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0\": not found" containerID="a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0" Mar 17 17:49:31.068205 kubelet[2846]: I0317 17:49:31.068129 2846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0"} err="failed to get container status \"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7d6756e81dab0321c8e2527859d760ccd19fc2089b181b9ba6dde3bef65f6a0\": not found" Mar 17 17:49:31.068205 kubelet[2846]: I0317 17:49:31.068151 2846 scope.go:117] "RemoveContainer" containerID="2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374" Mar 17 17:49:31.069760 containerd[1502]: time="2025-03-17T17:49:31.068411677Z" level=error msg="ContainerStatus for \"2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374\": not found" Mar 17 17:49:31.070871 kubelet[2846]: E0317 17:49:31.070510 2846 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374\": not found" containerID="2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374" Mar 17 17:49:31.070871 kubelet[2846]: I0317 17:49:31.070543 2846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374"} err="failed to get container status 
\"2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374\": rpc error: code = NotFound desc = an error occurred when try to find container \"2bbbc81eb9d34aca3f3e94d4f1c111bc41df0e21c46018248e46f49796963374\": not found" Mar 17 17:49:31.070871 kubelet[2846]: I0317 17:49:31.070566 2846 scope.go:117] "RemoveContainer" containerID="05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222" Mar 17 17:49:31.071001 containerd[1502]: time="2025-03-17T17:49:31.070787406Z" level=error msg="ContainerStatus for \"05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222\": not found" Mar 17 17:49:31.071030 kubelet[2846]: E0317 17:49:31.070954 2846 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222\": not found" containerID="05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222" Mar 17 17:49:31.071030 kubelet[2846]: I0317 17:49:31.070977 2846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222"} err="failed to get container status \"05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222\": rpc error: code = NotFound desc = an error occurred when try to find container \"05124861928ee22ab534c7f4da9ce10c17bbbe702b9903681b9b3d79acb19222\": not found" Mar 17 17:49:31.071657 containerd[1502]: time="2025-03-17T17:49:31.071609424Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:49:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:49:31.093018 containerd[1502]: 
time="2025-03-17T17:49:31.092932789Z" level=info msg="StopContainer for \"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600\" returns successfully" Mar 17 17:49:31.096376 containerd[1502]: time="2025-03-17T17:49:31.093680604Z" level=info msg="StopPodSandbox for \"93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c\"" Mar 17 17:49:31.096376 containerd[1502]: time="2025-03-17T17:49:31.094191695Z" level=info msg="Container to stop \"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:49:31.102806 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c-shm.mount: Deactivated successfully. Mar 17 17:49:31.138660 systemd[1]: Started cri-containerd-7f8c101f7f6007b61e9f288423d9cd7abcabd9aa48da163dd21b28e53d3f0877.scope - libcontainer container 7f8c101f7f6007b61e9f288423d9cd7abcabd9aa48da163dd21b28e53d3f0877. Mar 17 17:49:31.138926 systemd[1]: cri-containerd-93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c.scope: Deactivated successfully. 
Mar 17 17:49:31.185079 containerd[1502]: time="2025-03-17T17:49:31.185018071Z" level=info msg="shim disconnected" id=93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c namespace=k8s.io Mar 17 17:49:31.185428 containerd[1502]: time="2025-03-17T17:49:31.185405719Z" level=warning msg="cleaning up after shim disconnected" id=93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c namespace=k8s.io Mar 17 17:49:31.186427 containerd[1502]: time="2025-03-17T17:49:31.186009531Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:31.229294 containerd[1502]: time="2025-03-17T17:49:31.228583340Z" level=info msg="TearDown network for sandbox \"93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c\" successfully" Mar 17 17:49:31.229294 containerd[1502]: time="2025-03-17T17:49:31.228621261Z" level=info msg="StopPodSandbox for \"93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c\" returns successfully" Mar 17 17:49:31.292613 containerd[1502]: time="2025-03-17T17:49:31.292410872Z" level=info msg="StartContainer for \"7f8c101f7f6007b61e9f288423d9cd7abcabd9aa48da163dd21b28e53d3f0877\" returns successfully" Mar 17 17:49:31.306325 kubelet[2846]: I0317 17:49:31.306280 2846 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 17:49:31.306655 kubelet[2846]: I0317 17:49:31.306621 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ce1946e2-ace6-4124-9136-04c527b79ec8-typha-certs\") pod \"ce1946e2-ace6-4124-9136-04c527b79ec8\" (UID: \"ce1946e2-ace6-4124-9136-04c527b79ec8\") " Mar 17 17:49:31.306730 kubelet[2846]: I0317 17:49:31.306716 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9c75\" (UniqueName: 
\"kubernetes.io/projected/ce1946e2-ace6-4124-9136-04c527b79ec8-kube-api-access-h9c75\") pod \"ce1946e2-ace6-4124-9136-04c527b79ec8\" (UID: \"ce1946e2-ace6-4124-9136-04c527b79ec8\") " Mar 17 17:49:31.306759 kubelet[2846]: I0317 17:49:31.306745 2846 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce1946e2-ace6-4124-9136-04c527b79ec8-tigera-ca-bundle\") pod \"ce1946e2-ace6-4124-9136-04c527b79ec8\" (UID: \"ce1946e2-ace6-4124-9136-04c527b79ec8\") " Mar 17 17:49:31.312321 kubelet[2846]: I0317 17:49:31.312273 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1946e2-ace6-4124-9136-04c527b79ec8-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ce1946e2-ace6-4124-9136-04c527b79ec8" (UID: "ce1946e2-ace6-4124-9136-04c527b79ec8"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:49:31.318160 kubelet[2846]: I0317 17:49:31.318105 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce1946e2-ace6-4124-9136-04c527b79ec8-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "ce1946e2-ace6-4124-9136-04c527b79ec8" (UID: "ce1946e2-ace6-4124-9136-04c527b79ec8"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:49:31.322104 kubelet[2846]: I0317 17:49:31.321847 2846 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 17:49:31.322343 kubelet[2846]: I0317 17:49:31.322308 2846 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1946e2-ace6-4124-9136-04c527b79ec8-kube-api-access-h9c75" (OuterVolumeSpecName: "kube-api-access-h9c75") pod "ce1946e2-ace6-4124-9136-04c527b79ec8" (UID: "ce1946e2-ace6-4124-9136-04c527b79ec8"). 
InnerVolumeSpecName "kube-api-access-h9c75". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:49:31.329983 systemd[1]: cri-containerd-7f8c101f7f6007b61e9f288423d9cd7abcabd9aa48da163dd21b28e53d3f0877.scope: Deactivated successfully. Mar 17 17:49:31.331692 systemd[1]: cri-containerd-7f8c101f7f6007b61e9f288423d9cd7abcabd9aa48da163dd21b28e53d3f0877.scope: Consumed 29ms CPU time, 7.6M memory peak, 6.2M written to disk. Mar 17 17:49:31.371400 containerd[1502]: time="2025-03-17T17:49:31.370774788Z" level=info msg="shim disconnected" id=7f8c101f7f6007b61e9f288423d9cd7abcabd9aa48da163dd21b28e53d3f0877 namespace=k8s.io Mar 17 17:49:31.371400 containerd[1502]: time="2025-03-17T17:49:31.370828909Z" level=warning msg="cleaning up after shim disconnected" id=7f8c101f7f6007b61e9f288423d9cd7abcabd9aa48da163dd21b28e53d3f0877 namespace=k8s.io Mar 17 17:49:31.371400 containerd[1502]: time="2025-03-17T17:49:31.370837349Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:31.407345 kubelet[2846]: I0317 17:49:31.407240 2846 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h9c75\" (UniqueName: \"kubernetes.io/projected/ce1946e2-ace6-4124-9136-04c527b79ec8-kube-api-access-h9c75\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:31.407345 kubelet[2846]: I0317 17:49:31.407307 2846 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce1946e2-ace6-4124-9136-04c527b79ec8-tigera-ca-bundle\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:31.407345 kubelet[2846]: I0317 17:49:31.407322 2846 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ce1946e2-ace6-4124-9136-04c527b79ec8-typha-certs\") on node \"ci-4230-1-0-b-a06069b96b\" DevicePath \"\"" Mar 17 17:49:31.918751 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-7f8c101f7f6007b61e9f288423d9cd7abcabd9aa48da163dd21b28e53d3f0877-rootfs.mount: Deactivated successfully. Mar 17 17:49:31.918904 systemd[1]: var-lib-kubelet-pods-ce1946e2\x2dace6\x2d4124\x2d9136\x2d04c527b79ec8-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Mar 17 17:49:31.918992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c-rootfs.mount: Deactivated successfully. Mar 17 17:49:31.919068 systemd[1]: var-lib-kubelet-pods-ce1946e2\x2dace6\x2d4124\x2d9136\x2d04c527b79ec8-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Mar 17 17:49:31.919166 systemd[1]: var-lib-kubelet-pods-ce1946e2\x2dace6\x2d4124\x2d9136\x2d04c527b79ec8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh9c75.mount: Deactivated successfully. Mar 17 17:49:31.931193 kubelet[2846]: I0317 17:49:31.931055 2846 scope.go:117] "RemoveContainer" containerID="f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600" Mar 17 17:49:31.934851 containerd[1502]: time="2025-03-17T17:49:31.934499233Z" level=info msg="RemoveContainer for \"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600\"" Mar 17 17:49:31.939138 systemd[1]: Removed slice kubepods-besteffort-podce1946e2_ace6_4124_9136_04c527b79ec8.slice - libcontainer container kubepods-besteffort-podce1946e2_ace6_4124_9136_04c527b79ec8.slice. 
Mar 17 17:49:31.943284 containerd[1502]: time="2025-03-17T17:49:31.942836927Z" level=info msg="RemoveContainer for \"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600\" returns successfully" Mar 17 17:49:31.943982 kubelet[2846]: I0317 17:49:31.943658 2846 scope.go:117] "RemoveContainer" containerID="f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600" Mar 17 17:49:31.946268 containerd[1502]: time="2025-03-17T17:49:31.945475782Z" level=error msg="ContainerStatus for \"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600\": not found" Mar 17 17:49:31.947157 kubelet[2846]: E0317 17:49:31.947101 2846 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600\": not found" containerID="f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600" Mar 17 17:49:31.947157 kubelet[2846]: I0317 17:49:31.947132 2846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600"} err="failed to get container status \"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7e4d9aaf23bd3917ee72d6a47aecaf450f9ab99adcbb3e6eefe695ecf749600\": not found" Mar 17 17:49:31.951031 containerd[1502]: time="2025-03-17T17:49:31.950797333Z" level=info msg="CreateContainer within sandbox \"d6ed52febf6957f59b2eb24e8fdabd3477842eee869ca2738a4956e15376c914\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:49:31.977274 containerd[1502]: time="2025-03-17T17:49:31.977127483Z" level=info msg="CreateContainer within 
sandbox \"d6ed52febf6957f59b2eb24e8fdabd3477842eee869ca2738a4956e15376c914\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"48ec0d8a105dcc6b051e26013cdfd183c8545418d269452f26ddc2e75b551fe9\"" Mar 17 17:49:31.980404 containerd[1502]: time="2025-03-17T17:49:31.979530813Z" level=info msg="StartContainer for \"48ec0d8a105dcc6b051e26013cdfd183c8545418d269452f26ddc2e75b551fe9\"" Mar 17 17:49:32.025586 systemd[1]: Started cri-containerd-48ec0d8a105dcc6b051e26013cdfd183c8545418d269452f26ddc2e75b551fe9.scope - libcontainer container 48ec0d8a105dcc6b051e26013cdfd183c8545418d269452f26ddc2e75b551fe9. Mar 17 17:49:32.124646 containerd[1502]: time="2025-03-17T17:49:32.124118459Z" level=info msg="StartContainer for \"48ec0d8a105dcc6b051e26013cdfd183c8545418d269452f26ddc2e75b551fe9\" returns successfully" Mar 17 17:49:32.186672 kubelet[2846]: I0317 17:49:32.186548 2846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="447aab82-3c54-4fc9-a563-99b96e52f28a" path="/var/lib/kubelet/pods/447aab82-3c54-4fc9-a563-99b96e52f28a/volumes" Mar 17 17:49:32.188379 kubelet[2846]: I0317 17:49:32.187397 2846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c486f367-e8f1-4495-8302-4a805d81fa28" path="/var/lib/kubelet/pods/c486f367-e8f1-4495-8302-4a805d81fa28/volumes" Mar 17 17:49:32.188379 kubelet[2846]: I0317 17:49:32.188318 2846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce1946e2-ace6-4124-9136-04c527b79ec8" path="/var/lib/kubelet/pods/ce1946e2-ace6-4124-9136-04c527b79ec8/volumes" Mar 17 17:49:32.838007 containerd[1502]: time="2025-03-17T17:49:32.837961045Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/10-calico.conflist\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Mar 17 
17:49:32.841897 systemd[1]: cri-containerd-48ec0d8a105dcc6b051e26013cdfd183c8545418d269452f26ddc2e75b551fe9.scope: Deactivated successfully. Mar 17 17:49:32.842444 systemd[1]: cri-containerd-48ec0d8a105dcc6b051e26013cdfd183c8545418d269452f26ddc2e75b551fe9.scope: Consumed 631ms CPU time, 54M memory peak, 34.3M read from disk. Mar 17 17:49:32.867933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48ec0d8a105dcc6b051e26013cdfd183c8545418d269452f26ddc2e75b551fe9-rootfs.mount: Deactivated successfully. Mar 17 17:49:32.880801 containerd[1502]: time="2025-03-17T17:49:32.880502143Z" level=info msg="shim disconnected" id=48ec0d8a105dcc6b051e26013cdfd183c8545418d269452f26ddc2e75b551fe9 namespace=k8s.io Mar 17 17:49:32.880801 containerd[1502]: time="2025-03-17T17:49:32.880580824Z" level=warning msg="cleaning up after shim disconnected" id=48ec0d8a105dcc6b051e26013cdfd183c8545418d269452f26ddc2e75b551fe9 namespace=k8s.io Mar 17 17:49:32.880801 containerd[1502]: time="2025-03-17T17:49:32.880596105Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:32.980288 containerd[1502]: time="2025-03-17T17:49:32.979625915Z" level=info msg="CreateContainer within sandbox \"d6ed52febf6957f59b2eb24e8fdabd3477842eee869ca2738a4956e15376c914\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:49:32.995908 containerd[1502]: time="2025-03-17T17:49:32.995573891Z" level=info msg="CreateContainer within sandbox \"d6ed52febf6957f59b2eb24e8fdabd3477842eee869ca2738a4956e15376c914\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e74537b1cafcd8586554e9778dfd762b870bd1d271610c6a827395d88a13545a\"" Mar 17 17:49:32.997467 containerd[1502]: time="2025-03-17T17:49:32.997391810Z" level=info msg="StartContainer for \"e74537b1cafcd8586554e9778dfd762b870bd1d271610c6a827395d88a13545a\"" Mar 17 17:49:33.041833 systemd[1]: Started cri-containerd-e74537b1cafcd8586554e9778dfd762b870bd1d271610c6a827395d88a13545a.scope - libcontainer 
container e74537b1cafcd8586554e9778dfd762b870bd1d271610c6a827395d88a13545a. Mar 17 17:49:33.111061 containerd[1502]: time="2025-03-17T17:49:33.110834189Z" level=info msg="StartContainer for \"e74537b1cafcd8586554e9778dfd762b870bd1d271610c6a827395d88a13545a\" returns successfully" Mar 17 17:49:33.238942 kubelet[2846]: I0317 17:49:33.238322 2846 topology_manager.go:215] "Topology Admit Handler" podUID="381e38b0-aba5-42e0-a072-302c9bbcfda9" podNamespace="calico-system" podName="calico-kube-controllers-5488cf45bd-5sl5c" Mar 17 17:49:33.238942 kubelet[2846]: E0317 17:49:33.238508 2846 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce1946e2-ace6-4124-9136-04c527b79ec8" containerName="calico-typha" Mar 17 17:49:33.238942 kubelet[2846]: E0317 17:49:33.238524 2846 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="447aab82-3c54-4fc9-a563-99b96e52f28a" containerName="calico-kube-controllers" Mar 17 17:49:33.238942 kubelet[2846]: I0317 17:49:33.238566 2846 memory_manager.go:354] "RemoveStaleState removing state" podUID="447aab82-3c54-4fc9-a563-99b96e52f28a" containerName="calico-kube-controllers" Mar 17 17:49:33.238942 kubelet[2846]: I0317 17:49:33.238577 2846 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1946e2-ace6-4124-9136-04c527b79ec8" containerName="calico-typha" Mar 17 17:49:33.252282 systemd[1]: Created slice kubepods-besteffort-pod381e38b0_aba5_42e0_a072_302c9bbcfda9.slice - libcontainer container kubepods-besteffort-pod381e38b0_aba5_42e0_a072_302c9bbcfda9.slice. 
Mar 17 17:49:33.324551 kubelet[2846]: I0317 17:49:33.324503 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381e38b0-aba5-42e0-a072-302c9bbcfda9-tigera-ca-bundle\") pod \"calico-kube-controllers-5488cf45bd-5sl5c\" (UID: \"381e38b0-aba5-42e0-a072-302c9bbcfda9\") " pod="calico-system/calico-kube-controllers-5488cf45bd-5sl5c" Mar 17 17:49:33.324700 kubelet[2846]: I0317 17:49:33.324563 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2kp\" (UniqueName: \"kubernetes.io/projected/381e38b0-aba5-42e0-a072-302c9bbcfda9-kube-api-access-ms2kp\") pod \"calico-kube-controllers-5488cf45bd-5sl5c\" (UID: \"381e38b0-aba5-42e0-a072-302c9bbcfda9\") " pod="calico-system/calico-kube-controllers-5488cf45bd-5sl5c" Mar 17 17:49:33.381896 kubelet[2846]: I0317 17:49:33.381735 2846 topology_manager.go:215] "Topology Admit Handler" podUID="67a3db9a-7330-41d4-85cb-d8e13ff04d50" podNamespace="calico-system" podName="calico-typha-c848dc694-5nwnn" Mar 17 17:49:33.392169 systemd[1]: Created slice kubepods-besteffort-pod67a3db9a_7330_41d4_85cb_d8e13ff04d50.slice - libcontainer container kubepods-besteffort-pod67a3db9a_7330_41d4_85cb_d8e13ff04d50.slice. 
Mar 17 17:49:33.425647 kubelet[2846]: I0317 17:49:33.425199 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6zj5\" (UniqueName: \"kubernetes.io/projected/67a3db9a-7330-41d4-85cb-d8e13ff04d50-kube-api-access-c6zj5\") pod \"calico-typha-c848dc694-5nwnn\" (UID: \"67a3db9a-7330-41d4-85cb-d8e13ff04d50\") " pod="calico-system/calico-typha-c848dc694-5nwnn" Mar 17 17:49:33.425647 kubelet[2846]: I0317 17:49:33.425256 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/67a3db9a-7330-41d4-85cb-d8e13ff04d50-typha-certs\") pod \"calico-typha-c848dc694-5nwnn\" (UID: \"67a3db9a-7330-41d4-85cb-d8e13ff04d50\") " pod="calico-system/calico-typha-c848dc694-5nwnn" Mar 17 17:49:33.425647 kubelet[2846]: I0317 17:49:33.425299 2846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a3db9a-7330-41d4-85cb-d8e13ff04d50-tigera-ca-bundle\") pod \"calico-typha-c848dc694-5nwnn\" (UID: \"67a3db9a-7330-41d4-85cb-d8e13ff04d50\") " pod="calico-system/calico-typha-c848dc694-5nwnn" Mar 17 17:49:33.557634 containerd[1502]: time="2025-03-17T17:49:33.557565958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5488cf45bd-5sl5c,Uid:381e38b0-aba5-42e0-a072-302c9bbcfda9,Namespace:calico-system,Attempt:0,}" Mar 17 17:49:33.698174 containerd[1502]: time="2025-03-17T17:49:33.697747068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c848dc694-5nwnn,Uid:67a3db9a-7330-41d4-85cb-d8e13ff04d50,Namespace:calico-system,Attempt:0,}" Mar 17 17:49:33.722418 containerd[1502]: time="2025-03-17T17:49:33.722014226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:33.722418 containerd[1502]: time="2025-03-17T17:49:33.722109148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:33.722418 containerd[1502]: time="2025-03-17T17:49:33.722126388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:33.722418 containerd[1502]: time="2025-03-17T17:49:33.722219790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:33.731348 systemd-networkd[1397]: cali38791195943: Link UP Mar 17 17:49:33.732549 systemd-networkd[1397]: cali38791195943: Gained carrier Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.611 [INFO][6128] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--5488cf45bd--5sl5c-eth0 calico-kube-controllers-5488cf45bd- calico-system 381e38b0-aba5-42e0-a072-302c9bbcfda9 1045 0 2025-03-17 17:49:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5488cf45bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4230-1-0-b-a06069b96b calico-kube-controllers-5488cf45bd-5sl5c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali38791195943 [] []}} ContainerID="1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" Namespace="calico-system" Pod="calico-kube-controllers-5488cf45bd-5sl5c" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--5488cf45bd--5sl5c-" Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.611 
[INFO][6128] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" Namespace="calico-system" Pod="calico-kube-controllers-5488cf45bd-5sl5c" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--5488cf45bd--5sl5c-eth0" Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.651 [INFO][6140] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" HandleID="k8s-pod-network.1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--5488cf45bd--5sl5c-eth0" Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.676 [INFO][6140] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" HandleID="k8s-pod-network.1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--5488cf45bd--5sl5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330d70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230-1-0-b-a06069b96b", "pod":"calico-kube-controllers-5488cf45bd-5sl5c", "timestamp":"2025-03-17 17:49:33.651473961 +0000 UTC"}, Hostname:"ci-4230-1-0-b-a06069b96b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.676 [INFO][6140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.676 [INFO][6140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.677 [INFO][6140] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-1-0-b-a06069b96b' Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.679 [INFO][6140] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.684 [INFO][6140] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.690 [INFO][6140] ipam/ipam.go 489: Trying affinity for 192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.693 [INFO][6140] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.698 [INFO][6140] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.699 [INFO][6140] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.701 [INFO][6140] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042 Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.706 [INFO][6140] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.719 [INFO][6140] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.32.199/26] block=192.168.32.192/26 handle="k8s-pod-network.1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.719 [INFO][6140] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.199/26] handle="k8s-pod-network.1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" host="ci-4230-1-0-b-a06069b96b" Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.720 [INFO][6140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:49:33.754242 containerd[1502]: 2025-03-17 17:49:33.720 [INFO][6140] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.199/26] IPv6=[] ContainerID="1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" HandleID="k8s-pod-network.1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--5488cf45bd--5sl5c-eth0" Mar 17 17:49:33.755001 containerd[1502]: 2025-03-17 17:49:33.725 [INFO][6128] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" Namespace="calico-system" Pod="calico-kube-controllers-5488cf45bd-5sl5c" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--5488cf45bd--5sl5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--5488cf45bd--5sl5c-eth0", GenerateName:"calico-kube-controllers-5488cf45bd-", Namespace:"calico-system", SelfLink:"", UID:"381e38b0-aba5-42e0-a072-302c9bbcfda9", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5488cf45bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-1-0-b-a06069b96b", ContainerID:"", Pod:"calico-kube-controllers-5488cf45bd-5sl5c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali38791195943", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:33.755001 containerd[1502]: 2025-03-17 17:49:33.725 [INFO][6128] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.199/32] ContainerID="1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" Namespace="calico-system" Pod="calico-kube-controllers-5488cf45bd-5sl5c" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--5488cf45bd--5sl5c-eth0" Mar 17 17:49:33.755001 containerd[1502]: 2025-03-17 17:49:33.725 [INFO][6128] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38791195943 ContainerID="1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" Namespace="calico-system" Pod="calico-kube-controllers-5488cf45bd-5sl5c" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--5488cf45bd--5sl5c-eth0" Mar 17 17:49:33.755001 containerd[1502]: 2025-03-17 17:49:33.734 [INFO][6128] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" Namespace="calico-system" Pod="calico-kube-controllers-5488cf45bd-5sl5c" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--5488cf45bd--5sl5c-eth0" Mar 17 17:49:33.755001 containerd[1502]: 2025-03-17 17:49:33.734 [INFO][6128] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" Namespace="calico-system" Pod="calico-kube-controllers-5488cf45bd-5sl5c" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--5488cf45bd--5sl5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--5488cf45bd--5sl5c-eth0", GenerateName:"calico-kube-controllers-5488cf45bd-", Namespace:"calico-system", SelfLink:"", UID:"381e38b0-aba5-42e0-a072-302c9bbcfda9", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5488cf45bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-1-0-b-a06069b96b", ContainerID:"1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042", Pod:"calico-kube-controllers-5488cf45bd-5sl5c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.199/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali38791195943", MAC:"6e:1a:25:e7:c6:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:33.755001 containerd[1502]: 2025-03-17 17:49:33.747 [INFO][6128] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042" Namespace="calico-system" Pod="calico-kube-controllers-5488cf45bd-5sl5c" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--5488cf45bd--5sl5c-eth0" Mar 17 17:49:33.759635 systemd[1]: Started cri-containerd-0c06e581a8d2648b5303e6815ca07853b85ad823a0f7bea79ec7d1d8ee45a753.scope - libcontainer container 0c06e581a8d2648b5303e6815ca07853b85ad823a0f7bea79ec7d1d8ee45a753. Mar 17 17:49:33.793934 containerd[1502]: time="2025-03-17T17:49:33.793512351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:33.793934 containerd[1502]: time="2025-03-17T17:49:33.793574072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:33.793934 containerd[1502]: time="2025-03-17T17:49:33.793589513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:33.793934 containerd[1502]: time="2025-03-17T17:49:33.793667794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:33.811225 containerd[1502]: time="2025-03-17T17:49:33.810872841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c848dc694-5nwnn,Uid:67a3db9a-7330-41d4-85cb-d8e13ff04d50,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c06e581a8d2648b5303e6815ca07853b85ad823a0f7bea79ec7d1d8ee45a753\"" Mar 17 17:49:33.823672 systemd[1]: Started cri-containerd-1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042.scope - libcontainer container 1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042. Mar 17 17:49:33.831934 containerd[1502]: time="2025-03-17T17:49:33.831740247Z" level=info msg="CreateContainer within sandbox \"0c06e581a8d2648b5303e6815ca07853b85ad823a0f7bea79ec7d1d8ee45a753\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 17 17:49:33.848484 containerd[1502]: time="2025-03-17T17:49:33.848436883Z" level=info msg="CreateContainer within sandbox \"0c06e581a8d2648b5303e6815ca07853b85ad823a0f7bea79ec7d1d8ee45a753\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"37e61135ff27cb9c25331b5e17e18469d46c7432faa6b79e45e69b0f5a7f4609\"" Mar 17 17:49:33.850459 containerd[1502]: time="2025-03-17T17:49:33.849592627Z" level=info msg="StartContainer for \"37e61135ff27cb9c25331b5e17e18469d46c7432faa6b79e45e69b0f5a7f4609\"" Mar 17 17:49:33.879455 containerd[1502]: time="2025-03-17T17:49:33.879401063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5488cf45bd-5sl5c,Uid:381e38b0-aba5-42e0-a072-302c9bbcfda9,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042\"" Mar 17 17:49:33.899665 containerd[1502]: time="2025-03-17T17:49:33.899625975Z" level=info msg="CreateContainer within sandbox \"1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 17 17:49:33.914566 systemd[1]: Started cri-containerd-37e61135ff27cb9c25331b5e17e18469d46c7432faa6b79e45e69b0f5a7f4609.scope - libcontainer container 37e61135ff27cb9c25331b5e17e18469d46c7432faa6b79e45e69b0f5a7f4609. Mar 17 17:49:33.946142 containerd[1502]: time="2025-03-17T17:49:33.946087486Z" level=info msg="CreateContainer within sandbox \"1d928c29c0e246533a0c60f22ef8b55a59353fe22369f1e36f615208846e9042\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3e38cd4d55abcf77b3f7afad257f2e26e9391e732f1054775fe756199b0fe226\"" Mar 17 17:49:33.947931 containerd[1502]: time="2025-03-17T17:49:33.947886644Z" level=info msg="StartContainer for \"3e38cd4d55abcf77b3f7afad257f2e26e9391e732f1054775fe756199b0fe226\"" Mar 17 17:49:34.030480 systemd[1]: Started cri-containerd-3e38cd4d55abcf77b3f7afad257f2e26e9391e732f1054775fe756199b0fe226.scope - libcontainer container 3e38cd4d55abcf77b3f7afad257f2e26e9391e732f1054775fe756199b0fe226. 
Mar 17 17:49:34.041313 kubelet[2846]: I0317 17:49:34.041222 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9qjpj" podStartSLOduration=4.041190243 podStartE2EDuration="4.041190243s" podCreationTimestamp="2025-03-17 17:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:49:34.03642118 +0000 UTC m=+57.966697894" watchObservedRunningTime="2025-03-17 17:49:34.041190243 +0000 UTC m=+57.971466957" Mar 17 17:49:34.132815 containerd[1502]: time="2025-03-17T17:49:34.132334487Z" level=info msg="StartContainer for \"3e38cd4d55abcf77b3f7afad257f2e26e9391e732f1054775fe756199b0fe226\" returns successfully" Mar 17 17:49:34.182939 containerd[1502]: time="2025-03-17T17:49:34.182623091Z" level=info msg="StartContainer for \"37e61135ff27cb9c25331b5e17e18469d46c7432faa6b79e45e69b0f5a7f4609\" returns successfully" Mar 17 17:49:35.065787 kubelet[2846]: I0317 17:49:35.064743 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c848dc694-5nwnn" podStartSLOduration=6.064723474 podStartE2EDuration="6.064723474s" podCreationTimestamp="2025-03-17 17:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:49:35.03558412 +0000 UTC m=+58.965860834" watchObservedRunningTime="2025-03-17 17:49:35.064723474 +0000 UTC m=+58.995000268" Mar 17 17:49:35.268618 systemd-networkd[1397]: cali38791195943: Gained IPv6LL Mar 17 17:49:36.199278 containerd[1502]: time="2025-03-17T17:49:36.199233845Z" level=info msg="StopPodSandbox for \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\"" Mar 17 17:49:36.200918 containerd[1502]: time="2025-03-17T17:49:36.200407191Z" level=info msg="TearDown network for sandbox \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\" successfully" 
Mar 17 17:49:36.200918 containerd[1502]: time="2025-03-17T17:49:36.200462392Z" level=info msg="StopPodSandbox for \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\" returns successfully" Mar 17 17:49:36.204648 containerd[1502]: time="2025-03-17T17:49:36.204278716Z" level=info msg="RemovePodSandbox for \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\"" Mar 17 17:49:36.210855 containerd[1502]: time="2025-03-17T17:49:36.210790899Z" level=info msg="Forcibly stopping sandbox \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\"" Mar 17 17:49:36.211012 containerd[1502]: time="2025-03-17T17:49:36.210969183Z" level=info msg="TearDown network for sandbox \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\" successfully" Mar 17 17:49:36.219365 containerd[1502]: time="2025-03-17T17:49:36.219244884Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.219580 containerd[1502]: time="2025-03-17T17:49:36.219404608Z" level=info msg="RemovePodSandbox \"1fb008f5b22e00510dc0b3ed903d5e3736ef623bbcb9637090abd9787615afe3\" returns successfully" Mar 17 17:49:36.220738 containerd[1502]: time="2025-03-17T17:49:36.220704836Z" level=info msg="StopPodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\"" Mar 17 17:49:36.220848 containerd[1502]: time="2025-03-17T17:49:36.220817119Z" level=info msg="TearDown network for sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" successfully" Mar 17 17:49:36.220848 containerd[1502]: time="2025-03-17T17:49:36.220832399Z" level=info msg="StopPodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" returns successfully" Mar 17 17:49:36.221323 containerd[1502]: time="2025-03-17T17:49:36.221265849Z" level=info msg="RemovePodSandbox for \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\"" Mar 17 17:49:36.221323 containerd[1502]: time="2025-03-17T17:49:36.221311170Z" level=info msg="Forcibly stopping sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\"" Mar 17 17:49:36.221452 containerd[1502]: time="2025-03-17T17:49:36.221409532Z" level=info msg="TearDown network for sandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" successfully" Mar 17 17:49:36.226185 containerd[1502]: time="2025-03-17T17:49:36.226132636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.226344 containerd[1502]: time="2025-03-17T17:49:36.226211757Z" level=info msg="RemovePodSandbox \"0c971b353eda8908c0ed82a8f74aa2bcd914d1e10cc726cec37af3121fef3115\" returns successfully" Mar 17 17:49:36.226934 containerd[1502]: time="2025-03-17T17:49:36.226905453Z" level=info msg="StopPodSandbox for \"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153\"" Mar 17 17:49:36.379665 containerd[1502]: 2025-03-17 17:49:36.321 [WARNING][6608] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:36.379665 containerd[1502]: 2025-03-17 17:49:36.321 [INFO][6608] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Mar 17 17:49:36.379665 containerd[1502]: 2025-03-17 17:49:36.321 [INFO][6608] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" iface="eth0" netns="" Mar 17 17:49:36.379665 containerd[1502]: 2025-03-17 17:49:36.321 [INFO][6608] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Mar 17 17:49:36.379665 containerd[1502]: 2025-03-17 17:49:36.321 [INFO][6608] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Mar 17 17:49:36.379665 containerd[1502]: 2025-03-17 17:49:36.361 [INFO][6628] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" HandleID="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:36.379665 containerd[1502]: 2025-03-17 17:49:36.362 [INFO][6628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:49:36.379665 containerd[1502]: 2025-03-17 17:49:36.362 [INFO][6628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:49:36.379665 containerd[1502]: 2025-03-17 17:49:36.372 [WARNING][6628] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" HandleID="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:36.379665 containerd[1502]: 2025-03-17 17:49:36.372 [INFO][6628] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" HandleID="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:36.379665 containerd[1502]: 2025-03-17 17:49:36.374 [INFO][6628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:49:36.379665 containerd[1502]: 2025-03-17 17:49:36.376 [INFO][6608] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Mar 17 17:49:36.380061 containerd[1502]: time="2025-03-17T17:49:36.379806332Z" level=info msg="TearDown network for sandbox \"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153\" successfully" Mar 17 17:49:36.380061 containerd[1502]: time="2025-03-17T17:49:36.379839052Z" level=info msg="StopPodSandbox for \"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153\" returns successfully" Mar 17 17:49:36.380609 containerd[1502]: time="2025-03-17T17:49:36.380575229Z" level=info msg="RemovePodSandbox for \"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153\"" Mar 17 17:49:36.380609 containerd[1502]: time="2025-03-17T17:49:36.380609789Z" level=info msg="Forcibly stopping sandbox \"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153\"" Mar 17 17:49:36.509835 containerd[1502]: 2025-03-17 17:49:36.445 [WARNING][6649] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" WorkloadEndpoint="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:36.509835 containerd[1502]: 2025-03-17 17:49:36.446 [INFO][6649] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Mar 17 17:49:36.509835 containerd[1502]: 2025-03-17 17:49:36.446 [INFO][6649] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" iface="eth0" netns="" Mar 17 17:49:36.509835 containerd[1502]: 2025-03-17 17:49:36.446 [INFO][6649] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Mar 17 17:49:36.509835 containerd[1502]: 2025-03-17 17:49:36.446 [INFO][6649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Mar 17 17:49:36.509835 containerd[1502]: 2025-03-17 17:49:36.490 [INFO][6660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" HandleID="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:36.509835 containerd[1502]: 2025-03-17 17:49:36.492 [INFO][6660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:49:36.509835 containerd[1502]: 2025-03-17 17:49:36.492 [INFO][6660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:49:36.509835 containerd[1502]: 2025-03-17 17:49:36.503 [WARNING][6660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" HandleID="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:36.509835 containerd[1502]: 2025-03-17 17:49:36.503 [INFO][6660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" HandleID="k8s-pod-network.5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Workload="ci--4230--1--0--b--a06069b96b-k8s-calico--kube--controllers--6fb9b7ff55--f6s4k-eth0" Mar 17 17:49:36.509835 containerd[1502]: 2025-03-17 17:49:36.506 [INFO][6660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:49:36.509835 containerd[1502]: 2025-03-17 17:49:36.508 [INFO][6649] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153" Mar 17 17:49:36.510278 containerd[1502]: time="2025-03-17T17:49:36.509891589Z" level=info msg="TearDown network for sandbox \"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153\" successfully" Mar 17 17:49:36.515785 containerd[1502]: time="2025-03-17T17:49:36.515155465Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.515785 containerd[1502]: time="2025-03-17T17:49:36.515235587Z" level=info msg="RemovePodSandbox \"5297321f454afcd2a2ded6030e607409af6b9a6d7173431f3d237040a5eb0153\" returns successfully" Mar 17 17:49:36.516638 containerd[1502]: time="2025-03-17T17:49:36.516598457Z" level=info msg="StopPodSandbox for \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\"" Mar 17 17:49:36.516726 containerd[1502]: time="2025-03-17T17:49:36.516710699Z" level=info msg="TearDown network for sandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\" successfully" Mar 17 17:49:36.516726 containerd[1502]: time="2025-03-17T17:49:36.516722780Z" level=info msg="StopPodSandbox for \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\" returns successfully" Mar 17 17:49:36.518090 containerd[1502]: time="2025-03-17T17:49:36.517049627Z" level=info msg="RemovePodSandbox for \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\"" Mar 17 17:49:36.518090 containerd[1502]: time="2025-03-17T17:49:36.517081587Z" level=info msg="Forcibly stopping sandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\"" Mar 17 17:49:36.518090 containerd[1502]: time="2025-03-17T17:49:36.517144549Z" level=info msg="TearDown network for sandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\" successfully" Mar 17 17:49:36.521347 containerd[1502]: time="2025-03-17T17:49:36.521262519Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.521347 containerd[1502]: time="2025-03-17T17:49:36.521369242Z" level=info msg="RemovePodSandbox \"aad2d88a1412fe2affa71c39685bc3a5dc18f605404321267a5e5fb4a31e590c\" returns successfully" Mar 17 17:49:36.522614 containerd[1502]: time="2025-03-17T17:49:36.521958855Z" level=info msg="StopPodSandbox for \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\"" Mar 17 17:49:36.522614 containerd[1502]: time="2025-03-17T17:49:36.522238461Z" level=info msg="TearDown network for sandbox \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\" successfully" Mar 17 17:49:36.522614 containerd[1502]: time="2025-03-17T17:49:36.522250541Z" level=info msg="StopPodSandbox for \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\" returns successfully" Mar 17 17:49:36.522956 containerd[1502]: time="2025-03-17T17:49:36.522899115Z" level=info msg="RemovePodSandbox for \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\"" Mar 17 17:49:36.523077 containerd[1502]: time="2025-03-17T17:49:36.523037438Z" level=info msg="Forcibly stopping sandbox \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\"" Mar 17 17:49:36.523178 containerd[1502]: time="2025-03-17T17:49:36.523114560Z" level=info msg="TearDown network for sandbox \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\" successfully" Mar 17 17:49:36.531388 containerd[1502]: time="2025-03-17T17:49:36.531130736Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.531388 containerd[1502]: time="2025-03-17T17:49:36.531220098Z" level=info msg="RemovePodSandbox \"e596eff12fd821022449740737aea9436068b36d3ad8721bd70904dc3b441d42\" returns successfully" Mar 17 17:49:36.532002 containerd[1502]: time="2025-03-17T17:49:36.531959994Z" level=info msg="StopPodSandbox for \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\"" Mar 17 17:49:36.532065 containerd[1502]: time="2025-03-17T17:49:36.532057556Z" level=info msg="TearDown network for sandbox \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\" successfully" Mar 17 17:49:36.532110 containerd[1502]: time="2025-03-17T17:49:36.532067837Z" level=info msg="StopPodSandbox for \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\" returns successfully" Mar 17 17:49:36.533115 containerd[1502]: time="2025-03-17T17:49:36.533085099Z" level=info msg="RemovePodSandbox for \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\"" Mar 17 17:49:36.533115 containerd[1502]: time="2025-03-17T17:49:36.533116020Z" level=info msg="Forcibly stopping sandbox \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\"" Mar 17 17:49:36.533193 containerd[1502]: time="2025-03-17T17:49:36.533182661Z" level=info msg="TearDown network for sandbox \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\" successfully" Mar 17 17:49:36.541229 containerd[1502]: time="2025-03-17T17:49:36.541081275Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.541229 containerd[1502]: time="2025-03-17T17:49:36.541193157Z" level=info msg="RemovePodSandbox \"d747b7d40af24ef0294d48de324d91f11888d87426af4d2e4d02e1016875ece4\" returns successfully" Mar 17 17:49:36.543286 containerd[1502]: time="2025-03-17T17:49:36.543176161Z" level=info msg="StopPodSandbox for \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\"" Mar 17 17:49:36.543776 containerd[1502]: time="2025-03-17T17:49:36.543462447Z" level=info msg="TearDown network for sandbox \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\" successfully" Mar 17 17:49:36.543776 containerd[1502]: time="2025-03-17T17:49:36.543477367Z" level=info msg="StopPodSandbox for \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\" returns successfully" Mar 17 17:49:36.543998 containerd[1502]: time="2025-03-17T17:49:36.543971938Z" level=info msg="RemovePodSandbox for \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\"" Mar 17 17:49:36.544041 containerd[1502]: time="2025-03-17T17:49:36.544003739Z" level=info msg="Forcibly stopping sandbox \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\"" Mar 17 17:49:36.544191 containerd[1502]: time="2025-03-17T17:49:36.544173223Z" level=info msg="TearDown network for sandbox \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\" successfully" Mar 17 17:49:36.549172 containerd[1502]: time="2025-03-17T17:49:36.549074130Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.549172 containerd[1502]: time="2025-03-17T17:49:36.549152372Z" level=info msg="RemovePodSandbox \"5049d9a44327ceb26ec0e830ca6381dc56ceee5d50988f917f7a4786d52c565a\" returns successfully" Mar 17 17:49:36.551476 containerd[1502]: time="2025-03-17T17:49:36.550529242Z" level=info msg="StopPodSandbox for \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\"" Mar 17 17:49:36.551476 containerd[1502]: time="2025-03-17T17:49:36.550694806Z" level=info msg="TearDown network for sandbox \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\" successfully" Mar 17 17:49:36.551476 containerd[1502]: time="2025-03-17T17:49:36.550714926Z" level=info msg="StopPodSandbox for \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\" returns successfully" Mar 17 17:49:36.552122 containerd[1502]: time="2025-03-17T17:49:36.552081276Z" level=info msg="RemovePodSandbox for \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\"" Mar 17 17:49:36.552381 containerd[1502]: time="2025-03-17T17:49:36.552263600Z" level=info msg="Forcibly stopping sandbox \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\"" Mar 17 17:49:36.552820 containerd[1502]: time="2025-03-17T17:49:36.552681850Z" level=info msg="TearDown network for sandbox \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\" successfully" Mar 17 17:49:36.561267 containerd[1502]: time="2025-03-17T17:49:36.560806708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.561267 containerd[1502]: time="2025-03-17T17:49:36.560895310Z" level=info msg="RemovePodSandbox \"cac693edb575616753343d824a9457a99de9856c5a33c416bc4647adb099ad5e\" returns successfully" Mar 17 17:49:36.561702 containerd[1502]: time="2025-03-17T17:49:36.561658967Z" level=info msg="StopPodSandbox for \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\"" Mar 17 17:49:36.561927 containerd[1502]: time="2025-03-17T17:49:36.561818050Z" level=info msg="TearDown network for sandbox \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\" successfully" Mar 17 17:49:36.561927 containerd[1502]: time="2025-03-17T17:49:36.561841291Z" level=info msg="StopPodSandbox for \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\" returns successfully" Mar 17 17:49:36.562675 containerd[1502]: time="2025-03-17T17:49:36.562593707Z" level=info msg="RemovePodSandbox for \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\"" Mar 17 17:49:36.562675 containerd[1502]: time="2025-03-17T17:49:36.562642508Z" level=info msg="Forcibly stopping sandbox \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\"" Mar 17 17:49:36.562882 containerd[1502]: time="2025-03-17T17:49:36.562782671Z" level=info msg="TearDown network for sandbox \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\" successfully" Mar 17 17:49:36.568647 containerd[1502]: time="2025-03-17T17:49:36.568576999Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.568762 containerd[1502]: time="2025-03-17T17:49:36.568702961Z" level=info msg="RemovePodSandbox \"aa9e013f5ca04d1ebbb93c9070f637b76b54ede3e7247e7df3a244937591e7fd\" returns successfully" Mar 17 17:49:36.570047 containerd[1502]: time="2025-03-17T17:49:36.569198492Z" level=info msg="StopPodSandbox for \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\"" Mar 17 17:49:36.570047 containerd[1502]: time="2025-03-17T17:49:36.569313855Z" level=info msg="TearDown network for sandbox \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\" successfully" Mar 17 17:49:36.570047 containerd[1502]: time="2025-03-17T17:49:36.569325895Z" level=info msg="StopPodSandbox for \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\" returns successfully" Mar 17 17:49:36.575593 containerd[1502]: time="2025-03-17T17:49:36.575538952Z" level=info msg="RemovePodSandbox for \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\"" Mar 17 17:49:36.575593 containerd[1502]: time="2025-03-17T17:49:36.575586393Z" level=info msg="Forcibly stopping sandbox \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\"" Mar 17 17:49:36.575744 containerd[1502]: time="2025-03-17T17:49:36.575674195Z" level=info msg="TearDown network for sandbox \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\" successfully" Mar 17 17:49:36.580340 containerd[1502]: time="2025-03-17T17:49:36.580260095Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.580594 containerd[1502]: time="2025-03-17T17:49:36.580401938Z" level=info msg="RemovePodSandbox \"aa38ddb3a74978faa98defc476924f7bea4a6614ddc04e685499739754e2c46c\" returns successfully" Mar 17 17:49:36.580961 containerd[1502]: time="2025-03-17T17:49:36.580921790Z" level=info msg="StopPodSandbox for \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\"" Mar 17 17:49:36.581049 containerd[1502]: time="2025-03-17T17:49:36.581026712Z" level=info msg="TearDown network for sandbox \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\" successfully" Mar 17 17:49:36.581049 containerd[1502]: time="2025-03-17T17:49:36.581042713Z" level=info msg="StopPodSandbox for \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\" returns successfully" Mar 17 17:49:36.581558 containerd[1502]: time="2025-03-17T17:49:36.581536203Z" level=info msg="RemovePodSandbox for \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\"" Mar 17 17:49:36.581634 containerd[1502]: time="2025-03-17T17:49:36.581565764Z" level=info msg="Forcibly stopping sandbox \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\"" Mar 17 17:49:36.581674 containerd[1502]: time="2025-03-17T17:49:36.581641766Z" level=info msg="TearDown network for sandbox \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\" successfully" Mar 17 17:49:36.588480 containerd[1502]: time="2025-03-17T17:49:36.588311352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.588480 containerd[1502]: time="2025-03-17T17:49:36.588412514Z" level=info msg="RemovePodSandbox \"299f50ccf6450401a58b5a1664d16fd68b58fd7feb4d2c9a0f50a9c9342fbdbc\" returns successfully" Mar 17 17:49:36.589193 containerd[1502]: time="2025-03-17T17:49:36.589166011Z" level=info msg="StopPodSandbox for \"93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c\"" Mar 17 17:49:36.589275 containerd[1502]: time="2025-03-17T17:49:36.589257413Z" level=info msg="TearDown network for sandbox \"93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c\" successfully" Mar 17 17:49:36.589336 containerd[1502]: time="2025-03-17T17:49:36.589274093Z" level=info msg="StopPodSandbox for \"93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c\" returns successfully" Mar 17 17:49:36.591460 containerd[1502]: time="2025-03-17T17:49:36.589797625Z" level=info msg="RemovePodSandbox for \"93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c\"" Mar 17 17:49:36.591460 containerd[1502]: time="2025-03-17T17:49:36.589825826Z" level=info msg="Forcibly stopping sandbox \"93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c\"" Mar 17 17:49:36.591460 containerd[1502]: time="2025-03-17T17:49:36.589885187Z" level=info msg="TearDown network for sandbox \"93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c\" successfully" Mar 17 17:49:36.594191 containerd[1502]: time="2025-03-17T17:49:36.594140760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.594326 containerd[1502]: time="2025-03-17T17:49:36.594218442Z" level=info msg="RemovePodSandbox \"93a7bf0797f4a14c12c2f50ba40b0ed805b290e23f8674424454b1770a580a7c\" returns successfully" Mar 17 17:49:36.596465 containerd[1502]: time="2025-03-17T17:49:36.595636113Z" level=info msg="StopPodSandbox for \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\"" Mar 17 17:49:36.596465 containerd[1502]: time="2025-03-17T17:49:36.595750596Z" level=info msg="TearDown network for sandbox \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\" successfully" Mar 17 17:49:36.596465 containerd[1502]: time="2025-03-17T17:49:36.595760476Z" level=info msg="StopPodSandbox for \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\" returns successfully" Mar 17 17:49:36.597009 containerd[1502]: time="2025-03-17T17:49:36.596911301Z" level=info msg="RemovePodSandbox for \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\"" Mar 17 17:49:36.597009 containerd[1502]: time="2025-03-17T17:49:36.596939262Z" level=info msg="Forcibly stopping sandbox \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\"" Mar 17 17:49:36.597729 containerd[1502]: time="2025-03-17T17:49:36.597643237Z" level=info msg="TearDown network for sandbox \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\" successfully" Mar 17 17:49:36.602611 containerd[1502]: time="2025-03-17T17:49:36.602418742Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.603323 containerd[1502]: time="2025-03-17T17:49:36.602802031Z" level=info msg="RemovePodSandbox \"1dff5da5b48a9a4c38d977a15b3cbbbed28d1774b5cb474f2ca6ba3016afc107\" returns successfully" Mar 17 17:49:36.604407 containerd[1502]: time="2025-03-17T17:49:36.604091299Z" level=info msg="StopPodSandbox for \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\"" Mar 17 17:49:36.604527 containerd[1502]: time="2025-03-17T17:49:36.604498628Z" level=info msg="TearDown network for sandbox \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\" successfully" Mar 17 17:49:36.604562 containerd[1502]: time="2025-03-17T17:49:36.604540269Z" level=info msg="StopPodSandbox for \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\" returns successfully" Mar 17 17:49:36.605451 containerd[1502]: time="2025-03-17T17:49:36.605044320Z" level=info msg="RemovePodSandbox for \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\"" Mar 17 17:49:36.605451 containerd[1502]: time="2025-03-17T17:49:36.605086201Z" level=info msg="Forcibly stopping sandbox \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\"" Mar 17 17:49:36.605451 containerd[1502]: time="2025-03-17T17:49:36.605156722Z" level=info msg="TearDown network for sandbox \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\" successfully" Mar 17 17:49:36.610328 containerd[1502]: time="2025-03-17T17:49:36.610275475Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.610722 containerd[1502]: time="2025-03-17T17:49:36.610502480Z" level=info msg="RemovePodSandbox \"be9d0d3c0e3de5fd2fe723077907e17a8efc182ed378668b518d7af28e65a077\" returns successfully" Mar 17 17:49:36.611335 containerd[1502]: time="2025-03-17T17:49:36.611261936Z" level=info msg="StopPodSandbox for \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\"" Mar 17 17:49:36.611438 containerd[1502]: time="2025-03-17T17:49:36.611414140Z" level=info msg="TearDown network for sandbox \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\" successfully" Mar 17 17:49:36.611438 containerd[1502]: time="2025-03-17T17:49:36.611428020Z" level=info msg="StopPodSandbox for \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\" returns successfully" Mar 17 17:49:36.612377 containerd[1502]: time="2025-03-17T17:49:36.611959632Z" level=info msg="RemovePodSandbox for \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\"" Mar 17 17:49:36.612377 containerd[1502]: time="2025-03-17T17:49:36.611990152Z" level=info msg="Forcibly stopping sandbox \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\"" Mar 17 17:49:36.612377 containerd[1502]: time="2025-03-17T17:49:36.612057714Z" level=info msg="TearDown network for sandbox \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\" successfully" Mar 17 17:49:36.616847 containerd[1502]: time="2025-03-17T17:49:36.616787698Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.617436 containerd[1502]: time="2025-03-17T17:49:36.616858859Z" level=info msg="RemovePodSandbox \"6ff3be501afb82ce2a309c76ebfc85c9ae7def47346ddfc0bce2a5bb5773db9c\" returns successfully" Mar 17 17:49:36.617436 containerd[1502]: time="2025-03-17T17:49:36.617252948Z" level=info msg="StopPodSandbox for \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\"" Mar 17 17:49:36.617436 containerd[1502]: time="2025-03-17T17:49:36.617384231Z" level=info msg="TearDown network for sandbox \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\" successfully" Mar 17 17:49:36.617436 containerd[1502]: time="2025-03-17T17:49:36.617396671Z" level=info msg="StopPodSandbox for \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\" returns successfully" Mar 17 17:49:36.618595 containerd[1502]: time="2025-03-17T17:49:36.618470895Z" level=info msg="RemovePodSandbox for \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\"" Mar 17 17:49:36.618595 containerd[1502]: time="2025-03-17T17:49:36.618528136Z" level=info msg="Forcibly stopping sandbox \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\"" Mar 17 17:49:36.618694 containerd[1502]: time="2025-03-17T17:49:36.618628578Z" level=info msg="TearDown network for sandbox \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\" successfully" Mar 17 17:49:36.623437 containerd[1502]: time="2025-03-17T17:49:36.623384643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.623689 containerd[1502]: time="2025-03-17T17:49:36.623467125Z" level=info msg="RemovePodSandbox \"460de46b205d14294d33f9cf366f36b8b70553a8cced18e755ea82721ee4a278\" returns successfully" Mar 17 17:49:36.624950 containerd[1502]: time="2025-03-17T17:49:36.624880516Z" level=info msg="StopPodSandbox for \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\"" Mar 17 17:49:36.625137 containerd[1502]: time="2025-03-17T17:49:36.625044399Z" level=info msg="TearDown network for sandbox \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\" successfully" Mar 17 17:49:36.625186 containerd[1502]: time="2025-03-17T17:49:36.625142441Z" level=info msg="StopPodSandbox for \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\" returns successfully" Mar 17 17:49:36.625511 containerd[1502]: time="2025-03-17T17:49:36.625479889Z" level=info msg="RemovePodSandbox for \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\"" Mar 17 17:49:36.625511 containerd[1502]: time="2025-03-17T17:49:36.625511609Z" level=info msg="Forcibly stopping sandbox \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\"" Mar 17 17:49:36.625617 containerd[1502]: time="2025-03-17T17:49:36.625575891Z" level=info msg="TearDown network for sandbox \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\" successfully" Mar 17 17:49:36.631979 containerd[1502]: time="2025-03-17T17:49:36.631913590Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.631979 containerd[1502]: time="2025-03-17T17:49:36.631983752Z" level=info msg="RemovePodSandbox \"b77d6d76de543717f9f755f7459d56e343782aaba4eed12fea662f1e776c7f0f\" returns successfully" Mar 17 17:49:36.632562 containerd[1502]: time="2025-03-17T17:49:36.632533724Z" level=info msg="StopPodSandbox for \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\"" Mar 17 17:49:36.632686 containerd[1502]: time="2025-03-17T17:49:36.632639726Z" level=info msg="TearDown network for sandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\" successfully" Mar 17 17:49:36.632806 containerd[1502]: time="2025-03-17T17:49:36.632782329Z" level=info msg="StopPodSandbox for \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\" returns successfully" Mar 17 17:49:36.633218 containerd[1502]: time="2025-03-17T17:49:36.633176698Z" level=info msg="RemovePodSandbox for \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\"" Mar 17 17:49:36.633218 containerd[1502]: time="2025-03-17T17:49:36.633206059Z" level=info msg="Forcibly stopping sandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\"" Mar 17 17:49:36.633422 containerd[1502]: time="2025-03-17T17:49:36.633268540Z" level=info msg="TearDown network for sandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\" successfully" Mar 17 17:49:36.639439 containerd[1502]: time="2025-03-17T17:49:36.638610537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.639863 containerd[1502]: time="2025-03-17T17:49:36.639702841Z" level=info msg="RemovePodSandbox \"7cdb14e88887b787b4a913857ebcf67e49e3f316f906343d263291a3b4c3f350\" returns successfully" Mar 17 17:49:36.641121 containerd[1502]: time="2025-03-17T17:49:36.640283014Z" level=info msg="StopPodSandbox for \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\"" Mar 17 17:49:36.641121 containerd[1502]: time="2025-03-17T17:49:36.640454138Z" level=info msg="TearDown network for sandbox \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\" successfully" Mar 17 17:49:36.641121 containerd[1502]: time="2025-03-17T17:49:36.640466538Z" level=info msg="StopPodSandbox for \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\" returns successfully" Mar 17 17:49:36.642659 containerd[1502]: time="2025-03-17T17:49:36.641868609Z" level=info msg="RemovePodSandbox for \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\"" Mar 17 17:49:36.642659 containerd[1502]: time="2025-03-17T17:49:36.641901210Z" level=info msg="Forcibly stopping sandbox \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\"" Mar 17 17:49:36.642659 containerd[1502]: time="2025-03-17T17:49:36.641970291Z" level=info msg="TearDown network for sandbox \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\" successfully" Mar 17 17:49:36.648782 containerd[1502]: time="2025-03-17T17:49:36.648741640Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.649182 containerd[1502]: time="2025-03-17T17:49:36.649055887Z" level=info msg="RemovePodSandbox \"d637fc31c9806cc6e5d6af7003fc30f00b0b9615dcfdbd44bd63cf129846b8e1\" returns successfully" Mar 17 17:49:36.650432 containerd[1502]: time="2025-03-17T17:49:36.650060909Z" level=info msg="StopPodSandbox for \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\"" Mar 17 17:49:36.650432 containerd[1502]: time="2025-03-17T17:49:36.650167351Z" level=info msg="TearDown network for sandbox \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\" successfully" Mar 17 17:49:36.650432 containerd[1502]: time="2025-03-17T17:49:36.650177071Z" level=info msg="StopPodSandbox for \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\" returns successfully" Mar 17 17:49:36.651614 containerd[1502]: time="2025-03-17T17:49:36.650866566Z" level=info msg="RemovePodSandbox for \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\"" Mar 17 17:49:36.651614 containerd[1502]: time="2025-03-17T17:49:36.650900567Z" level=info msg="Forcibly stopping sandbox \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\"" Mar 17 17:49:36.651614 containerd[1502]: time="2025-03-17T17:49:36.650964089Z" level=info msg="TearDown network for sandbox \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\" successfully" Mar 17 17:49:36.663437 containerd[1502]: time="2025-03-17T17:49:36.663326760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.663575 containerd[1502]: time="2025-03-17T17:49:36.663465763Z" level=info msg="RemovePodSandbox \"f9753d1471433da77abe7aac03c78320733c993813de648ff7bfda1724ad0dca\" returns successfully" Mar 17 17:49:36.667933 containerd[1502]: time="2025-03-17T17:49:36.667893741Z" level=info msg="StopPodSandbox for \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\"" Mar 17 17:49:36.668047 containerd[1502]: time="2025-03-17T17:49:36.668011103Z" level=info msg="TearDown network for sandbox \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\" successfully" Mar 17 17:49:36.668047 containerd[1502]: time="2025-03-17T17:49:36.668021143Z" level=info msg="StopPodSandbox for \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\" returns successfully" Mar 17 17:49:36.669258 containerd[1502]: time="2025-03-17T17:49:36.669218970Z" level=info msg="RemovePodSandbox for \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\"" Mar 17 17:49:36.669258 containerd[1502]: time="2025-03-17T17:49:36.669258811Z" level=info msg="Forcibly stopping sandbox \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\"" Mar 17 17:49:36.669856 containerd[1502]: time="2025-03-17T17:49:36.669345572Z" level=info msg="TearDown network for sandbox \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\" successfully" Mar 17 17:49:36.674905 containerd[1502]: time="2025-03-17T17:49:36.674752531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.674905 containerd[1502]: time="2025-03-17T17:49:36.674839733Z" level=info msg="RemovePodSandbox \"806e9a47ffad2f2710a0ed3a2e3bf4f9af617ea74d49bcbaaf4f68de078189b1\" returns successfully" Mar 17 17:49:36.676837 containerd[1502]: time="2025-03-17T17:49:36.675973758Z" level=info msg="StopPodSandbox for \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\"" Mar 17 17:49:36.676837 containerd[1502]: time="2025-03-17T17:49:36.676082760Z" level=info msg="TearDown network for sandbox \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\" successfully" Mar 17 17:49:36.676837 containerd[1502]: time="2025-03-17T17:49:36.676093161Z" level=info msg="StopPodSandbox for \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\" returns successfully" Mar 17 17:49:36.677001 containerd[1502]: time="2025-03-17T17:49:36.676856497Z" level=info msg="RemovePodSandbox for \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\"" Mar 17 17:49:36.677001 containerd[1502]: time="2025-03-17T17:49:36.676883618Z" level=info msg="Forcibly stopping sandbox \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\"" Mar 17 17:49:36.677001 containerd[1502]: time="2025-03-17T17:49:36.676948139Z" level=info msg="TearDown network for sandbox \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\" successfully" Mar 17 17:49:36.682306 containerd[1502]: time="2025-03-17T17:49:36.682151014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.682306 containerd[1502]: time="2025-03-17T17:49:36.682238456Z" level=info msg="RemovePodSandbox \"63afed7b7f8ca439824d7424a60f2381fec03c10f1be9c3aa67bd8576e3ba107\" returns successfully" Mar 17 17:49:36.683344 containerd[1502]: time="2025-03-17T17:49:36.682811308Z" level=info msg="StopPodSandbox for \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\"" Mar 17 17:49:36.683344 containerd[1502]: time="2025-03-17T17:49:36.682908110Z" level=info msg="TearDown network for sandbox \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\" successfully" Mar 17 17:49:36.683344 containerd[1502]: time="2025-03-17T17:49:36.682919031Z" level=info msg="StopPodSandbox for \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\" returns successfully" Mar 17 17:49:36.684198 containerd[1502]: time="2025-03-17T17:49:36.684151578Z" level=info msg="RemovePodSandbox for \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\"" Mar 17 17:49:36.684198 containerd[1502]: time="2025-03-17T17:49:36.684199259Z" level=info msg="Forcibly stopping sandbox \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\"" Mar 17 17:49:36.684313 containerd[1502]: time="2025-03-17T17:49:36.684254100Z" level=info msg="TearDown network for sandbox \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\" successfully" Mar 17 17:49:36.691204 containerd[1502]: time="2025-03-17T17:49:36.689996626Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.691204 containerd[1502]: time="2025-03-17T17:49:36.690127109Z" level=info msg="RemovePodSandbox \"8b52cc952052b494e88a6129435e298d8d517632ce47ad3d88d8e7adcdcd734f\" returns successfully" Mar 17 17:49:36.691204 containerd[1502]: time="2025-03-17T17:49:36.690955807Z" level=info msg="StopPodSandbox for \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\"" Mar 17 17:49:36.691204 containerd[1502]: time="2025-03-17T17:49:36.691062850Z" level=info msg="TearDown network for sandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\" successfully" Mar 17 17:49:36.691204 containerd[1502]: time="2025-03-17T17:49:36.691072930Z" level=info msg="StopPodSandbox for \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\" returns successfully" Mar 17 17:49:36.691522 containerd[1502]: time="2025-03-17T17:49:36.691496979Z" level=info msg="RemovePodSandbox for \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\"" Mar 17 17:49:36.691522 containerd[1502]: time="2025-03-17T17:49:36.691522060Z" level=info msg="Forcibly stopping sandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\"" Mar 17 17:49:36.691618 containerd[1502]: time="2025-03-17T17:49:36.691592381Z" level=info msg="TearDown network for sandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\" successfully" Mar 17 17:49:36.695991 containerd[1502]: time="2025-03-17T17:49:36.695931436Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.696140 containerd[1502]: time="2025-03-17T17:49:36.696019478Z" level=info msg="RemovePodSandbox \"9e46b2198d3239a8ba44b75be8f87080bfe85e6859fb39cb220f20d2b3514a90\" returns successfully" Mar 17 17:49:36.697611 containerd[1502]: time="2025-03-17T17:49:36.697554672Z" level=info msg="StopPodSandbox for \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\"" Mar 17 17:49:36.697711 containerd[1502]: time="2025-03-17T17:49:36.697678355Z" level=info msg="TearDown network for sandbox \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\" successfully" Mar 17 17:49:36.697711 containerd[1502]: time="2025-03-17T17:49:36.697688555Z" level=info msg="StopPodSandbox for \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\" returns successfully" Mar 17 17:49:36.698170 containerd[1502]: time="2025-03-17T17:49:36.698125605Z" level=info msg="RemovePodSandbox for \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\"" Mar 17 17:49:36.698170 containerd[1502]: time="2025-03-17T17:49:36.698162646Z" level=info msg="Forcibly stopping sandbox \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\"" Mar 17 17:49:36.698258 containerd[1502]: time="2025-03-17T17:49:36.698245567Z" level=info msg="TearDown network for sandbox \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\" successfully" Mar 17 17:49:36.701943 containerd[1502]: time="2025-03-17T17:49:36.701879007Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.702037 containerd[1502]: time="2025-03-17T17:49:36.701994730Z" level=info msg="RemovePodSandbox \"c7eb8b2495e502d5b1c4bde919c607b255c2200ce921e5c14a0a204f739f7061\" returns successfully" Mar 17 17:49:36.702510 containerd[1502]: time="2025-03-17T17:49:36.702481660Z" level=info msg="StopPodSandbox for \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\"" Mar 17 17:49:36.702510 containerd[1502]: time="2025-03-17T17:49:36.702592503Z" level=info msg="TearDown network for sandbox \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\" successfully" Mar 17 17:49:36.702510 containerd[1502]: time="2025-03-17T17:49:36.702605663Z" level=info msg="StopPodSandbox for \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\" returns successfully" Mar 17 17:49:36.702905 containerd[1502]: time="2025-03-17T17:49:36.702890829Z" level=info msg="RemovePodSandbox for \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\"" Mar 17 17:49:36.702936 containerd[1502]: time="2025-03-17T17:49:36.702912470Z" level=info msg="Forcibly stopping sandbox \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\"" Mar 17 17:49:36.704035 containerd[1502]: time="2025-03-17T17:49:36.702974391Z" level=info msg="TearDown network for sandbox \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\" successfully" Mar 17 17:49:36.706931 containerd[1502]: time="2025-03-17T17:49:36.706879277Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.707044 containerd[1502]: time="2025-03-17T17:49:36.706956559Z" level=info msg="RemovePodSandbox \"5ae5be77dd31eea2aa0b972e222061b7b2f72951bd8a2b4460c761aff8d3fe8a\" returns successfully" Mar 17 17:49:36.707780 containerd[1502]: time="2025-03-17T17:49:36.707646974Z" level=info msg="StopPodSandbox for \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\"" Mar 17 17:49:36.707862 containerd[1502]: time="2025-03-17T17:49:36.707848058Z" level=info msg="TearDown network for sandbox \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\" successfully" Mar 17 17:49:36.707862 containerd[1502]: time="2025-03-17T17:49:36.707859499Z" level=info msg="StopPodSandbox for \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\" returns successfully" Mar 17 17:49:36.709160 containerd[1502]: time="2025-03-17T17:49:36.708621835Z" level=info msg="RemovePodSandbox for \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\"" Mar 17 17:49:36.709160 containerd[1502]: time="2025-03-17T17:49:36.708650316Z" level=info msg="Forcibly stopping sandbox \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\"" Mar 17 17:49:36.709160 containerd[1502]: time="2025-03-17T17:49:36.708718317Z" level=info msg="TearDown network for sandbox \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\" successfully" Mar 17 17:49:36.714907 containerd[1502]: time="2025-03-17T17:49:36.714842812Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.715383 containerd[1502]: time="2025-03-17T17:49:36.714918014Z" level=info msg="RemovePodSandbox \"d9cc389b2190f561758e428dba9d223172b3bed798b1686f97d3043e2792b361\" returns successfully" Mar 17 17:49:36.715481 containerd[1502]: time="2025-03-17T17:49:36.715453345Z" level=info msg="StopPodSandbox for \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\"" Mar 17 17:49:36.715578 containerd[1502]: time="2025-03-17T17:49:36.715563308Z" level=info msg="TearDown network for sandbox \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\" successfully" Mar 17 17:49:36.715610 containerd[1502]: time="2025-03-17T17:49:36.715579308Z" level=info msg="StopPodSandbox for \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\" returns successfully" Mar 17 17:49:36.716057 containerd[1502]: time="2025-03-17T17:49:36.715953356Z" level=info msg="RemovePodSandbox for \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\"" Mar 17 17:49:36.716110 containerd[1502]: time="2025-03-17T17:49:36.716067879Z" level=info msg="Forcibly stopping sandbox \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\"" Mar 17 17:49:36.716154 containerd[1502]: time="2025-03-17T17:49:36.716136520Z" level=info msg="TearDown network for sandbox \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\" successfully" Mar 17 17:49:36.719591 containerd[1502]: time="2025-03-17T17:49:36.719489394Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.719675 containerd[1502]: time="2025-03-17T17:49:36.719620037Z" level=info msg="RemovePodSandbox \"ea4343846c34b12dd55a40ab9a16eff96a678b1eb757912364dbf7960cb7d152\" returns successfully" Mar 17 17:49:36.721630 containerd[1502]: time="2025-03-17T17:49:36.721439117Z" level=info msg="StopPodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\"" Mar 17 17:49:36.721630 containerd[1502]: time="2025-03-17T17:49:36.721541719Z" level=info msg="TearDown network for sandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" successfully" Mar 17 17:49:36.721630 containerd[1502]: time="2025-03-17T17:49:36.721552199Z" level=info msg="StopPodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" returns successfully" Mar 17 17:49:36.722056 containerd[1502]: time="2025-03-17T17:49:36.722019690Z" level=info msg="RemovePodSandbox for \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\"" Mar 17 17:49:36.722056 containerd[1502]: time="2025-03-17T17:49:36.722051250Z" level=info msg="Forcibly stopping sandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\"" Mar 17 17:49:36.722145 containerd[1502]: time="2025-03-17T17:49:36.722121572Z" level=info msg="TearDown network for sandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" successfully" Mar 17 17:49:36.726259 containerd[1502]: time="2025-03-17T17:49:36.726209382Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:36.726396 containerd[1502]: time="2025-03-17T17:49:36.726285783Z" level=info msg="RemovePodSandbox \"8fb1bb25f908105cb98a2791276fc3455efa9ddb360638dc6ca2bc3b9b2a9dc2\" returns successfully"
Mar 17 17:49:36.726833 containerd[1502]: time="2025-03-17T17:49:36.726779554Z" level=info msg="StopPodSandbox for \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\""
Mar 17 17:49:36.726914 containerd[1502]: time="2025-03-17T17:49:36.726888797Z" level=info msg="TearDown network for sandbox \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\" successfully"
Mar 17 17:49:36.726914 containerd[1502]: time="2025-03-17T17:49:36.726899597Z" level=info msg="StopPodSandbox for \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\" returns successfully"
Mar 17 17:49:36.727701 containerd[1502]: time="2025-03-17T17:49:36.727223684Z" level=info msg="RemovePodSandbox for \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\""
Mar 17 17:49:36.727701 containerd[1502]: time="2025-03-17T17:49:36.727257965Z" level=info msg="Forcibly stopping sandbox \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\""
Mar 17 17:49:36.727701 containerd[1502]: time="2025-03-17T17:49:36.727389768Z" level=info msg="TearDown network for sandbox \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\" successfully"
Mar 17 17:49:36.735107 containerd[1502]: time="2025-03-17T17:49:36.735043176Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:49:36.736198 containerd[1502]: time="2025-03-17T17:49:36.735131178Z" level=info msg="RemovePodSandbox \"ada565c7b490ae665f8c3f889c1ad5362cf3573590ddf80442660037191a7268\" returns successfully"
Mar 17 17:49:36.736198 containerd[1502]: time="2025-03-17T17:49:36.735666989Z" level=info msg="StopPodSandbox for \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\""
Mar 17 17:49:36.736198 containerd[1502]: time="2025-03-17T17:49:36.735761952Z" level=info msg="TearDown network for sandbox \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\" successfully"
Mar 17 17:49:36.736198 containerd[1502]: time="2025-03-17T17:49:36.735772552Z" level=info msg="StopPodSandbox for \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\" returns successfully"
Mar 17 17:49:36.737075 containerd[1502]: time="2025-03-17T17:49:36.737009099Z" level=info msg="RemovePodSandbox for \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\""
Mar 17 17:49:36.737075 containerd[1502]: time="2025-03-17T17:49:36.737032659Z" level=info msg="Forcibly stopping sandbox \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\""
Mar 17 17:49:36.737122 containerd[1502]: time="2025-03-17T17:49:36.737098421Z" level=info msg="TearDown network for sandbox \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\" successfully"
Mar 17 17:49:36.740620 containerd[1502]: time="2025-03-17T17:49:36.740574257Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:49:36.740725 containerd[1502]: time="2025-03-17T17:49:36.740659939Z" level=info msg="RemovePodSandbox \"b5c0676748c1f2c2d93f6118d73e3d036f47f28a8d14b1cf337ce808891f29e7\" returns successfully"
Mar 17 17:49:36.741116 containerd[1502]: time="2025-03-17T17:49:36.741088349Z" level=info msg="StopPodSandbox for \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\""
Mar 17 17:49:36.742023 containerd[1502]: time="2025-03-17T17:49:36.741197151Z" level=info msg="TearDown network for sandbox \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\" successfully"
Mar 17 17:49:36.742023 containerd[1502]: time="2025-03-17T17:49:36.741218191Z" level=info msg="StopPodSandbox for \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\" returns successfully"
Mar 17 17:49:36.742517 containerd[1502]: time="2025-03-17T17:49:36.742487739Z" level=info msg="RemovePodSandbox for \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\""
Mar 17 17:49:36.742517 containerd[1502]: time="2025-03-17T17:49:36.742517740Z" level=info msg="Forcibly stopping sandbox \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\""
Mar 17 17:49:36.742631 containerd[1502]: time="2025-03-17T17:49:36.742581101Z" level=info msg="TearDown network for sandbox \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\" successfully"
Mar 17 17:49:36.745884 containerd[1502]: time="2025-03-17T17:49:36.745832933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:49:36.745966 containerd[1502]: time="2025-03-17T17:49:36.745910534Z" level=info msg="RemovePodSandbox \"deff0bcb67c31577b256c98034bd60367e6a3f9dbac8ffcf6cb6b40e8602a727\" returns successfully"
Mar 17 17:49:36.746373 containerd[1502]: time="2025-03-17T17:49:36.746286543Z" level=info msg="StopPodSandbox for \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\""
Mar 17 17:49:36.746449 containerd[1502]: time="2025-03-17T17:49:36.746429266Z" level=info msg="TearDown network for sandbox \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\" successfully"
Mar 17 17:49:36.746494 containerd[1502]: time="2025-03-17T17:49:36.746448866Z" level=info msg="StopPodSandbox for \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\" returns successfully"
Mar 17 17:49:36.746771 containerd[1502]: time="2025-03-17T17:49:36.746748633Z" level=info msg="RemovePodSandbox for \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\""
Mar 17 17:49:36.746811 containerd[1502]: time="2025-03-17T17:49:36.746781234Z" level=info msg="Forcibly stopping sandbox \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\""
Mar 17 17:49:36.746869 containerd[1502]: time="2025-03-17T17:49:36.746852115Z" level=info msg="TearDown network for sandbox \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\" successfully"
Mar 17 17:49:36.751485 containerd[1502]: time="2025-03-17T17:49:36.751234851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:49:36.751485 containerd[1502]: time="2025-03-17T17:49:36.751389055Z" level=info msg="RemovePodSandbox \"512226986e7a5f990988a8cf6a8137033a1276e538c4de1e9bbc9e14ce57650b\" returns successfully"
Mar 17 17:49:36.753236 containerd[1502]: time="2025-03-17T17:49:36.753165494Z" level=info msg="StopPodSandbox for \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\""
Mar 17 17:49:36.753463 containerd[1502]: time="2025-03-17T17:49:36.753278136Z" level=info msg="TearDown network for sandbox \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\" successfully"
Mar 17 17:49:36.753463 containerd[1502]: time="2025-03-17T17:49:36.753289657Z" level=info msg="StopPodSandbox for \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\" returns successfully"
Mar 17 17:49:36.753919 containerd[1502]: time="2025-03-17T17:49:36.753821828Z" level=info msg="RemovePodSandbox for \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\""
Mar 17 17:49:36.753919 containerd[1502]: time="2025-03-17T17:49:36.753902230Z" level=info msg="Forcibly stopping sandbox \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\""
Mar 17 17:49:36.754027 containerd[1502]: time="2025-03-17T17:49:36.753969872Z" level=info msg="TearDown network for sandbox \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\" successfully"
Mar 17 17:49:36.758067 containerd[1502]: time="2025-03-17T17:49:36.757986840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:49:36.758278 containerd[1502]: time="2025-03-17T17:49:36.758112123Z" level=info msg="RemovePodSandbox \"9ead9c4e2a44e418f76ce2c5208ca5444d95f73c84b7d7d94844c546dcab7616\" returns successfully"
Mar 17 17:49:54.104001 kubelet[2846]: I0317 17:49:54.103906 2846 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:49:54.131247 kubelet[2846]: I0317 17:49:54.131160 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5488cf45bd-5sl5c" podStartSLOduration=23.1311385 podStartE2EDuration="23.1311385s" podCreationTimestamp="2025-03-17 17:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:49:35.09074816 +0000 UTC m=+59.021024834" watchObservedRunningTime="2025-03-17 17:49:54.1311385 +0000 UTC m=+78.061415254"
Mar 17 17:51:03.585417 systemd[1]: run-containerd-runc-k8s.io-3e38cd4d55abcf77b3f7afad257f2e26e9391e732f1054775fe756199b0fe226-runc.JwXaSV.mount: Deactivated successfully.
Mar 17 17:52:29.293784 systemd[1]: run-containerd-runc-k8s.io-3e38cd4d55abcf77b3f7afad257f2e26e9391e732f1054775fe756199b0fe226-runc.g1uQTU.mount: Deactivated successfully.
Mar 17 17:52:36.380720 systemd[1]: Started sshd@7-138.199.148.212:22-199.45.155.75:50600.service - OpenSSH per-connection server daemon (199.45.155.75:50600).
Mar 17 17:52:51.443796 sshd[7109]: Connection closed by 199.45.155.75 port 50600 [preauth]
Mar 17 17:52:51.445265 systemd[1]: sshd@7-138.199.148.212:22-199.45.155.75:50600.service: Deactivated successfully.
Mar 17 17:53:26.764714 systemd[1]: Started sshd@8-138.199.148.212:22-139.178.89.65:55348.service - OpenSSH per-connection server daemon (139.178.89.65:55348).
Mar 17 17:53:27.754592 sshd[7179]: Accepted publickey for core from 139.178.89.65 port 55348 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:53:27.757522 sshd-session[7179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:27.767650 systemd-logind[1479]: New session 8 of user core.
Mar 17 17:53:27.772685 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 17 17:53:28.515698 sshd[7181]: Connection closed by 139.178.89.65 port 55348
Mar 17 17:53:28.516590 sshd-session[7179]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:28.522269 systemd[1]: sshd@8-138.199.148.212:22-139.178.89.65:55348.service: Deactivated successfully.
Mar 17 17:53:28.524630 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 17:53:28.525443 systemd-logind[1479]: Session 8 logged out. Waiting for processes to exit.
Mar 17 17:53:28.526947 systemd-logind[1479]: Removed session 8.
Mar 17 17:53:33.691754 systemd[1]: Started sshd@9-138.199.148.212:22-139.178.89.65:42542.service - OpenSSH per-connection server daemon (139.178.89.65:42542).
Mar 17 17:53:34.685879 sshd[7252]: Accepted publickey for core from 139.178.89.65 port 42542 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:53:34.688768 sshd-session[7252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:34.694646 systemd-logind[1479]: New session 9 of user core.
Mar 17 17:53:34.705151 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 17:53:35.444075 sshd[7254]: Connection closed by 139.178.89.65 port 42542
Mar 17 17:53:35.445027 sshd-session[7252]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:35.450411 systemd[1]: sshd@9-138.199.148.212:22-139.178.89.65:42542.service: Deactivated successfully.
Mar 17 17:53:35.453909 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 17:53:35.455700 systemd-logind[1479]: Session 9 logged out. Waiting for processes to exit.
Mar 17 17:53:35.456703 systemd-logind[1479]: Removed session 9.
Mar 17 17:53:40.628877 systemd[1]: Started sshd@10-138.199.148.212:22-139.178.89.65:42558.service - OpenSSH per-connection server daemon (139.178.89.65:42558).
Mar 17 17:53:41.623682 sshd[7269]: Accepted publickey for core from 139.178.89.65 port 42558 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:53:41.626069 sshd-session[7269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:41.632586 systemd-logind[1479]: New session 10 of user core.
Mar 17 17:53:41.638721 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 17 17:53:42.390934 sshd[7271]: Connection closed by 139.178.89.65 port 42558
Mar 17 17:53:42.391589 sshd-session[7269]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:42.397431 systemd[1]: sshd@10-138.199.148.212:22-139.178.89.65:42558.service: Deactivated successfully.
Mar 17 17:53:42.399703 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 17:53:42.401149 systemd-logind[1479]: Session 10 logged out. Waiting for processes to exit.
Mar 17 17:53:42.402692 systemd-logind[1479]: Removed session 10.
Mar 17 17:53:42.570879 systemd[1]: Started sshd@11-138.199.148.212:22-139.178.89.65:53346.service - OpenSSH per-connection server daemon (139.178.89.65:53346).
Mar 17 17:53:43.555498 sshd[7284]: Accepted publickey for core from 139.178.89.65 port 53346 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:53:43.557301 sshd-session[7284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:43.563594 systemd-logind[1479]: New session 11 of user core.
Mar 17 17:53:43.568662 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 17 17:53:44.365756 sshd[7286]: Connection closed by 139.178.89.65 port 53346
Mar 17 17:53:44.366310 sshd-session[7284]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:44.370459 systemd-logind[1479]: Session 11 logged out. Waiting for processes to exit.
Mar 17 17:53:44.370831 systemd[1]: sshd@11-138.199.148.212:22-139.178.89.65:53346.service: Deactivated successfully.
Mar 17 17:53:44.374476 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 17:53:44.378175 systemd-logind[1479]: Removed session 11.
Mar 17 17:53:44.547471 systemd[1]: Started sshd@12-138.199.148.212:22-139.178.89.65:53360.service - OpenSSH per-connection server daemon (139.178.89.65:53360).
Mar 17 17:53:45.533948 sshd[7300]: Accepted publickey for core from 139.178.89.65 port 53360 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:53:45.537082 sshd-session[7300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:45.542946 systemd-logind[1479]: New session 12 of user core.
Mar 17 17:53:45.547552 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 17 17:53:46.288075 sshd[7302]: Connection closed by 139.178.89.65 port 53360
Mar 17 17:53:46.289444 sshd-session[7300]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:46.295924 systemd[1]: sshd@12-138.199.148.212:22-139.178.89.65:53360.service: Deactivated successfully.
Mar 17 17:53:46.299700 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 17:53:46.301103 systemd-logind[1479]: Session 12 logged out. Waiting for processes to exit.
Mar 17 17:53:46.302103 systemd-logind[1479]: Removed session 12.
Mar 17 17:53:51.470971 systemd[1]: Started sshd@13-138.199.148.212:22-139.178.89.65:49614.service - OpenSSH per-connection server daemon (139.178.89.65:49614).
Mar 17 17:53:52.452415 sshd[7314]: Accepted publickey for core from 139.178.89.65 port 49614 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:53:52.454259 sshd-session[7314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:52.459723 systemd-logind[1479]: New session 13 of user core.
Mar 17 17:53:52.464602 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 17 17:53:53.207503 sshd[7318]: Connection closed by 139.178.89.65 port 49614
Mar 17 17:53:53.209245 sshd-session[7314]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:53.215417 systemd[1]: sshd@13-138.199.148.212:22-139.178.89.65:49614.service: Deactivated successfully.
Mar 17 17:53:53.218550 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 17:53:53.220120 systemd-logind[1479]: Session 13 logged out. Waiting for processes to exit.
Mar 17 17:53:53.221693 systemd-logind[1479]: Removed session 13.
Mar 17 17:53:53.389798 systemd[1]: Started sshd@14-138.199.148.212:22-139.178.89.65:49622.service - OpenSSH per-connection server daemon (139.178.89.65:49622).
Mar 17 17:53:54.381690 sshd[7329]: Accepted publickey for core from 139.178.89.65 port 49622 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:53:54.383655 sshd-session[7329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:54.389680 systemd-logind[1479]: New session 14 of user core.
Mar 17 17:53:54.400720 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 17 17:53:55.256900 sshd[7331]: Connection closed by 139.178.89.65 port 49622
Mar 17 17:53:55.258799 sshd-session[7329]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:55.263571 systemd[1]: sshd@14-138.199.148.212:22-139.178.89.65:49622.service: Deactivated successfully.
Mar 17 17:53:55.266569 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 17:53:55.267894 systemd-logind[1479]: Session 14 logged out. Waiting for processes to exit.
Mar 17 17:53:55.269028 systemd-logind[1479]: Removed session 14.
Mar 17 17:53:55.431674 systemd[1]: Started sshd@15-138.199.148.212:22-139.178.89.65:49638.service - OpenSSH per-connection server daemon (139.178.89.65:49638).
Mar 17 17:53:56.423966 sshd[7340]: Accepted publickey for core from 139.178.89.65 port 49638 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:53:56.426418 sshd-session[7340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:56.431283 systemd-logind[1479]: New session 15 of user core.
Mar 17 17:53:56.443652 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 17:53:59.105127 sshd[7342]: Connection closed by 139.178.89.65 port 49638
Mar 17 17:53:59.106307 sshd-session[7340]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:59.112872 systemd[1]: sshd@15-138.199.148.212:22-139.178.89.65:49638.service: Deactivated successfully.
Mar 17 17:53:59.116912 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 17:53:59.117234 systemd[1]: session-15.scope: Consumed 595ms CPU time, 75.8M memory peak.
Mar 17 17:53:59.118078 systemd-logind[1479]: Session 15 logged out. Waiting for processes to exit.
Mar 17 17:53:59.119626 systemd-logind[1479]: Removed session 15.
Mar 17 17:53:59.282692 systemd[1]: Started sshd@16-138.199.148.212:22-139.178.89.65:49646.service - OpenSSH per-connection server daemon (139.178.89.65:49646).
Mar 17 17:54:00.286699 sshd[7359]: Accepted publickey for core from 139.178.89.65 port 49646 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:54:00.289664 sshd-session[7359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:54:00.297687 systemd-logind[1479]: New session 16 of user core.
Mar 17 17:54:00.307706 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 17:54:01.294336 sshd[7361]: Connection closed by 139.178.89.65 port 49646
Mar 17 17:54:01.296655 sshd-session[7359]: pam_unix(sshd:session): session closed for user core
Mar 17 17:54:01.300291 systemd[1]: sshd@16-138.199.148.212:22-139.178.89.65:49646.service: Deactivated successfully.
Mar 17 17:54:01.303410 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 17:54:01.305819 systemd-logind[1479]: Session 16 logged out. Waiting for processes to exit.
Mar 17 17:54:01.307606 systemd-logind[1479]: Removed session 16.
Mar 17 17:54:01.472792 systemd[1]: Started sshd@17-138.199.148.212:22-139.178.89.65:47884.service - OpenSSH per-connection server daemon (139.178.89.65:47884).
Mar 17 17:54:02.458066 sshd[7392]: Accepted publickey for core from 139.178.89.65 port 47884 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:54:02.459866 sshd-session[7392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:54:02.465062 systemd-logind[1479]: New session 17 of user core.
Mar 17 17:54:02.470581 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 17:54:03.213464 sshd[7394]: Connection closed by 139.178.89.65 port 47884
Mar 17 17:54:03.214337 sshd-session[7392]: pam_unix(sshd:session): session closed for user core
Mar 17 17:54:03.218997 systemd-logind[1479]: Session 17 logged out. Waiting for processes to exit.
Mar 17 17:54:03.219961 systemd[1]: sshd@17-138.199.148.212:22-139.178.89.65:47884.service: Deactivated successfully.
Mar 17 17:54:03.222398 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 17:54:03.224745 systemd-logind[1479]: Removed session 17.
Mar 17 17:54:08.399765 systemd[1]: Started sshd@18-138.199.148.212:22-139.178.89.65:47898.service - OpenSSH per-connection server daemon (139.178.89.65:47898).
Mar 17 17:54:09.387451 sshd[7428]: Accepted publickey for core from 139.178.89.65 port 47898 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:54:09.390697 sshd-session[7428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:54:09.396180 systemd-logind[1479]: New session 18 of user core.
Mar 17 17:54:09.401764 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 17:54:10.142500 sshd[7430]: Connection closed by 139.178.89.65 port 47898
Mar 17 17:54:10.143217 sshd-session[7428]: pam_unix(sshd:session): session closed for user core
Mar 17 17:54:10.147391 systemd-logind[1479]: Session 18 logged out. Waiting for processes to exit.
Mar 17 17:54:10.149774 systemd[1]: sshd@18-138.199.148.212:22-139.178.89.65:47898.service: Deactivated successfully.
Mar 17 17:54:10.153860 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 17:54:10.155518 systemd-logind[1479]: Removed session 18.
Mar 17 17:54:15.323230 systemd[1]: Started sshd@19-138.199.148.212:22-139.178.89.65:50822.service - OpenSSH per-connection server daemon (139.178.89.65:50822).
Mar 17 17:54:16.324449 sshd[7446]: Accepted publickey for core from 139.178.89.65 port 50822 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:54:16.326212 sshd-session[7446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:54:16.334021 systemd-logind[1479]: New session 19 of user core.
Mar 17 17:54:16.339555 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 17:54:17.078202 sshd[7448]: Connection closed by 139.178.89.65 port 50822
Mar 17 17:54:17.078776 sshd-session[7446]: pam_unix(sshd:session): session closed for user core
Mar 17 17:54:17.084501 systemd[1]: sshd@19-138.199.148.212:22-139.178.89.65:50822.service: Deactivated successfully.
Mar 17 17:54:17.086879 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 17:54:17.088080 systemd-logind[1479]: Session 19 logged out. Waiting for processes to exit.
Mar 17 17:54:17.089761 systemd-logind[1479]: Removed session 19.
Mar 17 17:54:31.891580 update_engine[1482]: I20250317 17:54:31.891470 1482 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 17 17:54:31.891580 update_engine[1482]: I20250317 17:54:31.891535 1482 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 17 17:54:31.892300 update_engine[1482]: I20250317 17:54:31.891855 1482 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 17 17:54:31.893733 update_engine[1482]: I20250317 17:54:31.893687 1482 omaha_request_params.cc:62] Current group set to beta
Mar 17 17:54:31.895011 update_engine[1482]: I20250317 17:54:31.894788 1482 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 17 17:54:31.895011 update_engine[1482]: I20250317 17:54:31.894816 1482 update_attempter.cc:643] Scheduling an action processor start.
Mar 17 17:54:31.895011 update_engine[1482]: I20250317 17:54:31.894836 1482 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 17 17:54:31.898385 update_engine[1482]: I20250317 17:54:31.896903 1482 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 17 17:54:31.898385 update_engine[1482]: I20250317 17:54:31.897037 1482 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 17 17:54:31.898385 update_engine[1482]: I20250317 17:54:31.897053 1482 omaha_request_action.cc:272] Request:
Mar 17 17:54:31.898385 update_engine[1482]:
Mar 17 17:54:31.898385 update_engine[1482]:
Mar 17 17:54:31.898385 update_engine[1482]:
Mar 17 17:54:31.898385 update_engine[1482]:
Mar 17 17:54:31.898385 update_engine[1482]:
Mar 17 17:54:31.898385 update_engine[1482]:
Mar 17 17:54:31.898385 update_engine[1482]:
Mar 17 17:54:31.898385 update_engine[1482]:
Mar 17 17:54:31.898385 update_engine[1482]: I20250317 17:54:31.897065 1482 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 17:54:31.902452 update_engine[1482]: I20250317 17:54:31.902130 1482 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 17:54:31.902734 update_engine[1482]: I20250317 17:54:31.902698 1482 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 17:54:31.903680 update_engine[1482]: E20250317 17:54:31.903476 1482 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 17:54:31.903680 update_engine[1482]: I20250317 17:54:31.903647 1482 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 17 17:54:31.904010 locksmithd[1506]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 17 17:54:32.061411 systemd[1]: cri-containerd-f9b43802cce43d2ac17888fe6f21742bcfa781c472eabf385bb38c1a42e3b45a.scope: Deactivated successfully.
Mar 17 17:54:32.061855 systemd[1]: cri-containerd-f9b43802cce43d2ac17888fe6f21742bcfa781c472eabf385bb38c1a42e3b45a.scope: Consumed 9.820s CPU time, 43.7M memory peak.
Mar 17 17:54:32.088916 containerd[1502]: time="2025-03-17T17:54:32.088812329Z" level=info msg="shim disconnected" id=f9b43802cce43d2ac17888fe6f21742bcfa781c472eabf385bb38c1a42e3b45a namespace=k8s.io
Mar 17 17:54:32.089447 containerd[1502]: time="2025-03-17T17:54:32.088915211Z" level=warning msg="cleaning up after shim disconnected" id=f9b43802cce43d2ac17888fe6f21742bcfa781c472eabf385bb38c1a42e3b45a namespace=k8s.io
Mar 17 17:54:32.089447 containerd[1502]: time="2025-03-17T17:54:32.088935212Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:54:32.090121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9b43802cce43d2ac17888fe6f21742bcfa781c472eabf385bb38c1a42e3b45a-rootfs.mount: Deactivated successfully.
Mar 17 17:54:32.485876 kubelet[2846]: E0317 17:54:32.485787 2846 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36320->10.0.0.2:2379: read: connection timed out"
Mar 17 17:54:32.677648 systemd[1]: cri-containerd-b47085a503e3c8c06f9e027ff89e543e01c1b32ad28d05420ee713238a67db1f.scope: Deactivated successfully.
Mar 17 17:54:32.678322 systemd[1]: cri-containerd-b47085a503e3c8c06f9e027ff89e543e01c1b32ad28d05420ee713238a67db1f.scope: Consumed 6.050s CPU time, 68.3M memory peak, 3.2M read from disk.
Mar 17 17:54:32.712182 containerd[1502]: time="2025-03-17T17:54:32.711678965Z" level=info msg="shim disconnected" id=b47085a503e3c8c06f9e027ff89e543e01c1b32ad28d05420ee713238a67db1f namespace=k8s.io
Mar 17 17:54:32.712182 containerd[1502]: time="2025-03-17T17:54:32.711739166Z" level=warning msg="cleaning up after shim disconnected" id=b47085a503e3c8c06f9e027ff89e543e01c1b32ad28d05420ee713238a67db1f namespace=k8s.io
Mar 17 17:54:32.712182 containerd[1502]: time="2025-03-17T17:54:32.711747046Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:54:32.711764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b47085a503e3c8c06f9e027ff89e543e01c1b32ad28d05420ee713238a67db1f-rootfs.mount: Deactivated successfully.
Mar 17 17:54:32.810377 kubelet[2846]: I0317 17:54:32.809959 2846 scope.go:117] "RemoveContainer" containerID="b47085a503e3c8c06f9e027ff89e543e01c1b32ad28d05420ee713238a67db1f"
Mar 17 17:54:32.812190 kubelet[2846]: I0317 17:54:32.811725 2846 scope.go:117] "RemoveContainer" containerID="f9b43802cce43d2ac17888fe6f21742bcfa781c472eabf385bb38c1a42e3b45a"
Mar 17 17:54:32.814392 containerd[1502]: time="2025-03-17T17:54:32.814037740Z" level=info msg="CreateContainer within sandbox \"f27232dce0076f52f6722ab0aa6be763e8bac2f3cbdb8aa13ae1481c1b42e2e5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 17 17:54:32.814970 containerd[1502]: time="2025-03-17T17:54:32.814857477Z" level=info msg="CreateContainer within sandbox \"e15f0a0a86ac615b7f0ff915a6f70e42b369b1a6e56ddc225f5ccc6f57a94df7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 17 17:54:32.828015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476184006.mount: Deactivated successfully.
Mar 17 17:54:32.835895 containerd[1502]: time="2025-03-17T17:54:32.835590070Z" level=info msg="CreateContainer within sandbox \"e15f0a0a86ac615b7f0ff915a6f70e42b369b1a6e56ddc225f5ccc6f57a94df7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9f21bdaef721fd0abc4a5d3a7ee137879d58b941a3489d4e9dae4d6021734f5e\""
Mar 17 17:54:32.836953 containerd[1502]: time="2025-03-17T17:54:32.836828016Z" level=info msg="StartContainer for \"9f21bdaef721fd0abc4a5d3a7ee137879d58b941a3489d4e9dae4d6021734f5e\""
Mar 17 17:54:32.837574 containerd[1502]: time="2025-03-17T17:54:32.837487869Z" level=info msg="CreateContainer within sandbox \"f27232dce0076f52f6722ab0aa6be763e8bac2f3cbdb8aa13ae1481c1b42e2e5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"470842e1702a2c2f1d2afb57259ae2f8bc28c0188cb55f8fdf238009090a73ff\""
Mar 17 17:54:32.838056 containerd[1502]: time="2025-03-17T17:54:32.837942959Z" level=info msg="StartContainer for \"470842e1702a2c2f1d2afb57259ae2f8bc28c0188cb55f8fdf238009090a73ff\""
Mar 17 17:54:32.873555 systemd[1]: Started cri-containerd-470842e1702a2c2f1d2afb57259ae2f8bc28c0188cb55f8fdf238009090a73ff.scope - libcontainer container 470842e1702a2c2f1d2afb57259ae2f8bc28c0188cb55f8fdf238009090a73ff.
Mar 17 17:54:32.875415 systemd[1]: Started cri-containerd-9f21bdaef721fd0abc4a5d3a7ee137879d58b941a3489d4e9dae4d6021734f5e.scope - libcontainer container 9f21bdaef721fd0abc4a5d3a7ee137879d58b941a3489d4e9dae4d6021734f5e.
Mar 17 17:54:32.910601 containerd[1502]: time="2025-03-17T17:54:32.910550234Z" level=info msg="StartContainer for \"9f21bdaef721fd0abc4a5d3a7ee137879d58b941a3489d4e9dae4d6021734f5e\" returns successfully"
Mar 17 17:54:32.931050 containerd[1502]: time="2025-03-17T17:54:32.930979260Z" level=info msg="StartContainer for \"470842e1702a2c2f1d2afb57259ae2f8bc28c0188cb55f8fdf238009090a73ff\" returns successfully"
Mar 17 17:54:36.902205 kubelet[2846]: E0317 17:54:36.901910 2846 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36154->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-1-0-b-a06069b96b.182da8a71498616b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-1-0-b-a06069b96b,UID:fae5a8f1a84b45a4b3e26871bce7df43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-1-0-b-a06069b96b,},FirstTimestamp:2025-03-17 17:54:26.420785515 +0000 UTC m=+350.351062229,LastTimestamp:2025-03-17 17:54:26.420785515 +0000 UTC m=+350.351062229,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-0-b-a06069b96b,}"
Mar 17 17:54:37.613833 systemd[1]: cri-containerd-9f21bdaef721fd0abc4a5d3a7ee137879d58b941a3489d4e9dae4d6021734f5e.scope: Deactivated successfully.
Mar 17 17:54:37.640860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f21bdaef721fd0abc4a5d3a7ee137879d58b941a3489d4e9dae4d6021734f5e-rootfs.mount: Deactivated successfully.
Mar 17 17:54:37.651083 containerd[1502]: time="2025-03-17T17:54:37.650797709Z" level=info msg="shim disconnected" id=9f21bdaef721fd0abc4a5d3a7ee137879d58b941a3489d4e9dae4d6021734f5e namespace=k8s.io Mar 17 17:54:37.651083 containerd[1502]: time="2025-03-17T17:54:37.650917871Z" level=warning msg="cleaning up after shim disconnected" id=9f21bdaef721fd0abc4a5d3a7ee137879d58b941a3489d4e9dae4d6021734f5e namespace=k8s.io Mar 17 17:54:37.651083 containerd[1502]: time="2025-03-17T17:54:37.650928071Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:54:37.663998 containerd[1502]: time="2025-03-17T17:54:37.663942184Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:54:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:54:37.832890 kubelet[2846]: I0317 17:54:37.832216 2846 scope.go:117] "RemoveContainer" containerID="f9b43802cce43d2ac17888fe6f21742bcfa781c472eabf385bb38c1a42e3b45a" Mar 17 17:54:37.832890 kubelet[2846]: I0317 17:54:37.832574 2846 scope.go:117] "RemoveContainer" containerID="9f21bdaef721fd0abc4a5d3a7ee137879d58b941a3489d4e9dae4d6021734f5e" Mar 17 17:54:37.832890 kubelet[2846]: E0317 17:54:37.832801 2846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6479d6dc54-285h7_tigera-operator(8649dfee-cbea-4f1b-81e2-c3a36ca0da0f)\"" pod="tigera-operator/tigera-operator-6479d6dc54-285h7" podUID="8649dfee-cbea-4f1b-81e2-c3a36ca0da0f" Mar 17 17:54:37.834306 containerd[1502]: time="2025-03-17T17:54:37.834272551Z" level=info msg="RemoveContainer for \"f9b43802cce43d2ac17888fe6f21742bcfa781c472eabf385bb38c1a42e3b45a\"" Mar 17 17:54:37.839198 containerd[1502]: time="2025-03-17T17:54:37.839156974Z" level=info msg="RemoveContainer for 
\"f9b43802cce43d2ac17888fe6f21742bcfa781c472eabf385bb38c1a42e3b45a\" returns successfully" Mar 17 17:54:38.532335 systemd[1]: cri-containerd-403c8dc116a3b232e63df6ab58d678e2c97f0873d3e3b1c6f01aad4e5921397f.scope: Deactivated successfully. Mar 17 17:54:38.533068 systemd[1]: cri-containerd-403c8dc116a3b232e63df6ab58d678e2c97f0873d3e3b1c6f01aad4e5921397f.scope: Consumed 2.313s CPU time, 24.8M memory peak, 3.3M read from disk. Mar 17 17:54:38.557269 containerd[1502]: time="2025-03-17T17:54:38.557081579Z" level=info msg="shim disconnected" id=403c8dc116a3b232e63df6ab58d678e2c97f0873d3e3b1c6f01aad4e5921397f namespace=k8s.io Mar 17 17:54:38.557775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-403c8dc116a3b232e63df6ab58d678e2c97f0873d3e3b1c6f01aad4e5921397f-rootfs.mount: Deactivated successfully. Mar 17 17:54:38.558594 containerd[1502]: time="2025-03-17T17:54:38.558251883Z" level=warning msg="cleaning up after shim disconnected" id=403c8dc116a3b232e63df6ab58d678e2c97f0873d3e3b1c6f01aad4e5921397f namespace=k8s.io Mar 17 17:54:38.558594 containerd[1502]: time="2025-03-17T17:54:38.558278084Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:54:38.572318 containerd[1502]: time="2025-03-17T17:54:38.571752006Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:54:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:54:38.840570 kubelet[2846]: I0317 17:54:38.840458 2846 scope.go:117] "RemoveContainer" containerID="403c8dc116a3b232e63df6ab58d678e2c97f0873d3e3b1c6f01aad4e5921397f" Mar 17 17:54:38.849886 containerd[1502]: time="2025-03-17T17:54:38.849483987Z" level=info msg="CreateContainer within sandbox \"9dd80424bac3f6bf44657149bf4ff87058bf619bdee2a78527430d5d2780aaa1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 17 17:54:38.864620 containerd[1502]: 
time="2025-03-17T17:54:38.864303058Z" level=info msg="CreateContainer within sandbox \"9dd80424bac3f6bf44657149bf4ff87058bf619bdee2a78527430d5d2780aaa1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"316e3b90dbf95f82b1f0e97d551b5fc24b06f1739ddea7f5ffc79167b974882f\"" Mar 17 17:54:38.866039 containerd[1502]: time="2025-03-17T17:54:38.864833829Z" level=info msg="StartContainer for \"316e3b90dbf95f82b1f0e97d551b5fc24b06f1739ddea7f5ffc79167b974882f\"" Mar 17 17:54:38.897669 systemd[1]: Started cri-containerd-316e3b90dbf95f82b1f0e97d551b5fc24b06f1739ddea7f5ffc79167b974882f.scope - libcontainer container 316e3b90dbf95f82b1f0e97d551b5fc24b06f1739ddea7f5ffc79167b974882f. Mar 17 17:54:38.940097 containerd[1502]: time="2025-03-17T17:54:38.939978044Z" level=info msg="StartContainer for \"316e3b90dbf95f82b1f0e97d551b5fc24b06f1739ddea7f5ffc79167b974882f\" returns successfully"