Feb 13 15:55:54.337626 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:55:54.337649 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025
Feb 13 15:55:54.337657 kernel: KASLR enabled
Feb 13 15:55:54.337663 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 13 15:55:54.337670 kernel: printk: bootconsole [pl11] enabled
Feb 13 15:55:54.337676 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:55:54.337683 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e423d98
Feb 13 15:55:54.337689 kernel: random: crng init done
Feb 13 15:55:54.337695 kernel: secureboot: Secure boot disabled
Feb 13 15:55:54.337701 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:55:54.337707 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Feb 13 15:55:54.337713 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:55:54.337719 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:55:54.337727 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Feb 13 15:55:54.337734 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:55:54.337741 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:55:54.337747 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:55:54.337755 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:55:54.337761 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:55:54.337767 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:55:54.337774 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 13 15:55:54.337780 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:55:54.337786 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 13 15:55:54.337793 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Feb 13 15:55:54.337799 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Feb 13 15:55:54.337805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Feb 13 15:55:54.337812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Feb 13 15:55:54.337818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Feb 13 15:55:54.337825 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Feb 13 15:55:54.337832 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Feb 13 15:55:54.337838 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Feb 13 15:55:54.337844 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Feb 13 15:55:54.337850 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Feb 13 15:55:54.337857 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Feb 13 15:55:54.337863 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Feb 13 15:55:54.337869 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Feb 13 15:55:54.337875 kernel: Zone ranges:
Feb 13 15:55:54.337882 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 13 15:55:54.337888 kernel: DMA32 empty
Feb 13 15:55:54.337894 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 15:55:54.337904 kernel: Movable zone start for each node
Feb 13 15:55:54.337910 kernel: Early memory node ranges
Feb 13 15:55:54.337917 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 13 15:55:54.340995 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Feb 13 15:55:54.341017 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Feb 13 15:55:54.341032 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Feb 13 15:55:54.341039 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Feb 13 15:55:54.341046 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Feb 13 15:55:54.341053 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 15:55:54.341061 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 13 15:55:54.341068 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 13 15:55:54.341075 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:55:54.341081 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:55:54.341088 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:55:54.341095 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 13 15:55:54.341101 kernel: psci: SMC Calling Convention v1.4
Feb 13 15:55:54.341108 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Feb 13 15:55:54.341117 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Feb 13 15:55:54.341123 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:55:54.341130 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:55:54.341137 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:55:54.341144 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:55:54.341151 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:55:54.341157 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:55:54.341164 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:55:54.341177 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:55:54.341185 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:55:54.341192 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:55:54.341200 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 13 15:55:54.341207 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:55:54.341214 kernel: alternatives: applying boot alternatives
Feb 13 15:55:54.341223 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:55:54.341230 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:55:54.341237 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:55:54.341244 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:55:54.341261 kernel: Fallback order for Node 0: 0
Feb 13 15:55:54.341267 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 13 15:55:54.341274 kernel: Policy zone: Normal
Feb 13 15:55:54.341280 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:55:54.341289 kernel: software IO TLB: area num 2.
Feb 13 15:55:54.341296 kernel: software IO TLB: mapped [mem 0x0000000036630000-0x000000003a630000] (64MB)
Feb 13 15:55:54.341303 kernel: Memory: 3982436K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 211724K reserved, 0K cma-reserved)
Feb 13 15:55:54.341310 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:55:54.341316 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:55:54.341324 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:55:54.341331 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:55:54.341338 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:55:54.341345 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:55:54.341352 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:55:54.341358 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:55:54.341367 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:55:54.341373 kernel: GICv3: 960 SPIs implemented
Feb 13 15:55:54.341380 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:55:54.341387 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:55:54.341393 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:55:54.341400 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 13 15:55:54.341407 kernel: ITS: No ITS available, not enabling LPIs
Feb 13 15:55:54.341414 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:55:54.341421 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:55:54.341428 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:55:54.341435 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:55:54.341442 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:55:54.341450 kernel: Console: colour dummy device 80x25
Feb 13 15:55:54.341457 kernel: printk: console [tty1] enabled
Feb 13 15:55:54.341464 kernel: ACPI: Core revision 20230628
Feb 13 15:55:54.341471 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:55:54.341478 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:55:54.341485 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:55:54.341492 kernel: landlock: Up and running.
Feb 13 15:55:54.341499 kernel: SELinux: Initializing.
Feb 13 15:55:54.341506 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:55:54.341515 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:55:54.341522 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:55:54.341529 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:55:54.341536 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 13 15:55:54.341543 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Feb 13 15:55:54.341550 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 15:55:54.341557 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:55:54.341570 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:55:54.341577 kernel: Remapping and enabling EFI services.
Feb 13 15:55:54.341585 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:55:54.341592 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:55:54.341599 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 13 15:55:54.341608 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:55:54.341615 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:55:54.341623 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:55:54.341630 kernel: SMP: Total of 2 processors activated.
Feb 13 15:55:54.341637 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:55:54.341647 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 13 15:55:54.341654 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:55:54.341661 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:55:54.341669 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:55:54.341676 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:55:54.341683 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:55:54.341690 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:55:54.341698 kernel: alternatives: applying system-wide alternatives
Feb 13 15:55:54.341705 kernel: devtmpfs: initialized
Feb 13 15:55:54.341714 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:55:54.341722 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:55:54.341729 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:55:54.341737 kernel: SMBIOS 3.1.0 present.
Feb 13 15:55:54.341745 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Feb 13 15:55:54.341752 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:55:54.341759 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:55:54.341767 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:55:54.341776 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:55:54.341783 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:55:54.341790 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Feb 13 15:55:54.341798 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:55:54.341805 kernel: cpuidle: using governor menu
Feb 13 15:55:54.341812 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:55:54.341820 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:55:54.341827 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:55:54.341835 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:55:54.341843 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:55:54.341851 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:55:54.341859 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 15:55:54.341866 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:55:54.341873 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:55:54.341881 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:55:54.341889 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:55:54.341896 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:55:54.341903 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:55:54.341919 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:55:54.342044 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:55:54.342053 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:55:54.342060 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:55:54.342068 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:55:54.342075 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:55:54.342083 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:55:54.342090 kernel: ACPI: Interpreter enabled
Feb 13 15:55:54.342097 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:55:54.342105 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:55:54.342115 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:55:54.342122 kernel: printk: bootconsole [pl11] disabled
Feb 13 15:55:54.342129 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 13 15:55:54.342137 kernel: iommu: Default domain type: Translated
Feb 13 15:55:54.342144 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:55:54.342151 kernel: efivars: Registered efivars operations
Feb 13 15:55:54.342159 kernel: vgaarb: loaded
Feb 13 15:55:54.342166 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:55:54.342173 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:55:54.342183 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:55:54.342190 kernel: pnp: PnP ACPI init
Feb 13 15:55:54.342197 kernel: pnp: PnP ACPI: found 0 devices
Feb 13 15:55:54.342204 kernel: NET: Registered PF_INET protocol family
Feb 13 15:55:54.342212 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:55:54.342219 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:55:54.342227 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:55:54.342234 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:55:54.342243 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:55:54.342251 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:55:54.342274 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:55:54.342283 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:55:54.342291 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:55:54.342298 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:55:54.342306 kernel: kvm [1]: HYP mode not available
Feb 13 15:55:54.342313 kernel: Initialise system trusted keyrings
Feb 13 15:55:54.342321 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:55:54.342330 kernel: Key type asymmetric registered
Feb 13 15:55:54.342338 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:55:54.342345 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:55:54.342353 kernel: io scheduler mq-deadline registered
Feb 13 15:55:54.342360 kernel: io scheduler kyber registered
Feb 13 15:55:54.342367 kernel: io scheduler bfq registered
Feb 13 15:55:54.342374 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:55:54.342382 kernel: thunder_xcv, ver 1.0
Feb 13 15:55:54.342389 kernel: thunder_bgx, ver 1.0
Feb 13 15:55:54.342396 kernel: nicpf, ver 1.0
Feb 13 15:55:54.342405 kernel: nicvf, ver 1.0
Feb 13 15:55:54.342571 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:55:54.342642 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:55:53 UTC (1739462153)
Feb 13 15:55:54.342652 kernel: efifb: probing for efifb
Feb 13 15:55:54.342660 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 15:55:54.342667 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 15:55:54.342675 kernel: efifb: scrolling: redraw
Feb 13 15:55:54.342684 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 15:55:54.342692 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 15:55:54.342699 kernel: fb0: EFI VGA frame buffer device
Feb 13 15:55:54.342707 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 13 15:55:54.342714 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:55:54.342721 kernel: No ACPI PMU IRQ for CPU0
Feb 13 15:55:54.342729 kernel: No ACPI PMU IRQ for CPU1
Feb 13 15:55:54.342736 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 13 15:55:54.342743 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:55:54.342752 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:55:54.342760 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:55:54.342767 kernel: Segment Routing with IPv6
Feb 13 15:55:54.342775 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:55:54.342782 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:55:54.342789 kernel: Key type dns_resolver registered
Feb 13 15:55:54.342796 kernel: registered taskstats version 1
Feb 13 15:55:54.342804 kernel: Loading compiled-in X.509 certificates
Feb 13 15:55:54.342811 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51'
Feb 13 15:55:54.342818 kernel: Key type .fscrypt registered
Feb 13 15:55:54.342827 kernel: Key type fscrypt-provisioning registered
Feb 13 15:55:54.342835 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:55:54.342842 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:55:54.342850 kernel: ima: No architecture policies found
Feb 13 15:55:54.342857 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:55:54.342864 kernel: clk: Disabling unused clocks
Feb 13 15:55:54.342871 kernel: Freeing unused kernel memory: 39680K
Feb 13 15:55:54.342879 kernel: Run /init as init process
Feb 13 15:55:54.342887 kernel: with arguments:
Feb 13 15:55:54.342894 kernel: /init
Feb 13 15:55:54.342902 kernel: with environment:
Feb 13 15:55:54.342909 kernel: HOME=/
Feb 13 15:55:54.342916 kernel: TERM=linux
Feb 13 15:55:54.343719 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:55:54.343743 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:55:54.343754 systemd[1]: Detected virtualization microsoft.
Feb 13 15:55:54.343767 systemd[1]: Detected architecture arm64.
Feb 13 15:55:54.343775 systemd[1]: Running in initrd.
Feb 13 15:55:54.343783 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:55:54.343790 systemd[1]: Hostname set to .
Feb 13 15:55:54.343798 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:55:54.343806 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:55:54.343814 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:55:54.343822 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:55:54.343833 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:55:54.343841 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:55:54.343849 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:55:54.343858 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:55:54.343867 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:55:54.343875 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:55:54.343883 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:55:54.343893 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:55:54.343901 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:55:54.343909 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:55:54.343917 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:55:54.343938 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:55:54.343946 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:55:54.343957 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:55:54.343966 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:55:54.343974 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:55:54.343984 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:55:54.343993 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:55:54.344001 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:55:54.344009 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:55:54.344017 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:55:54.344025 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:55:54.344033 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:55:54.344041 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:55:54.344050 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:55:54.344059 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:55:54.344097 systemd-journald[218]: Collecting audit messages is disabled.
Feb 13 15:55:54.344118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:55:54.344129 systemd-journald[218]: Journal started
Feb 13 15:55:54.344148 systemd-journald[218]: Runtime Journal (/run/log/journal/b54f09a69a984b6083a7f752c6ff6549) is 8.0M, max 78.5M, 70.5M free.
Feb 13 15:55:54.341226 systemd-modules-load[219]: Inserted module 'overlay'
Feb 13 15:55:54.374787 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:55:54.374848 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:55:54.374863 kernel: Bridge firewalling registered
Feb 13 15:55:54.378159 systemd-modules-load[219]: Inserted module 'br_netfilter'
Feb 13 15:55:54.390872 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:55:54.397715 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:55:54.406953 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:55:54.417947 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:55:54.433951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:55:54.461232 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:55:54.477156 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:55:54.496455 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:55:54.510304 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:55:54.531005 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:55:54.538409 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:55:54.557496 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:55:54.566212 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:55:54.592164 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:55:54.606619 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:55:54.623606 dracut-cmdline[251]: dracut-dracut-053
Feb 13 15:55:54.621397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:55:54.644266 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:55:54.677805 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:55:54.711185 systemd-resolved[264]: Positive Trust Anchors:
Feb 13 15:55:54.715797 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:55:54.715835 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:55:54.718374 systemd-resolved[264]: Defaulting to hostname 'linux'.
Feb 13 15:55:54.719400 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:55:54.736524 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:55:54.809942 kernel: SCSI subsystem initialized
Feb 13 15:55:54.817954 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:55:54.827948 kernel: iscsi: registered transport (tcp)
Feb 13 15:55:54.847382 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:55:54.847448 kernel: QLogic iSCSI HBA Driver
Feb 13 15:55:54.881330 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:55:54.898199 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:55:54.933106 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:55:54.933170 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:55:54.939738 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:55:54.988952 kernel: raid6: neonx8 gen() 15761 MB/s
Feb 13 15:55:55.008939 kernel: raid6: neonx4 gen() 15648 MB/s
Feb 13 15:55:55.028938 kernel: raid6: neonx2 gen() 13242 MB/s
Feb 13 15:55:55.050942 kernel: raid6: neonx1 gen() 10485 MB/s
Feb 13 15:55:55.070935 kernel: raid6: int64x8 gen() 6950 MB/s
Feb 13 15:55:55.090933 kernel: raid6: int64x4 gen() 7353 MB/s
Feb 13 15:55:55.111939 kernel: raid6: int64x2 gen() 6133 MB/s
Feb 13 15:55:55.135198 kernel: raid6: int64x1 gen() 5061 MB/s
Feb 13 15:55:55.135224 kernel: raid6: using algorithm neonx8 gen() 15761 MB/s
Feb 13 15:55:55.160082 kernel: raid6: .... xor() 11931 MB/s, rmw enabled
Feb 13 15:55:55.160097 kernel: raid6: using neon recovery algorithm
Feb 13 15:55:55.168937 kernel: xor: measuring software checksum speed
Feb 13 15:55:55.175758 kernel: 8regs : 18182 MB/sec
Feb 13 15:55:55.175780 kernel: 32regs : 19679 MB/sec
Feb 13 15:55:55.179091 kernel: arm64_neon : 27025 MB/sec
Feb 13 15:55:55.183053 kernel: xor: using function: arm64_neon (27025 MB/sec)
Feb 13 15:55:55.233948 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:55:55.245817 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:55:55.263115 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:55:55.285429 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Feb 13 15:55:55.291613 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:55:55.310228 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:55:55.327962 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation
Feb 13 15:55:55.357470 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:55:55.372076 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:55:55.412974 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:55:55.436228 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:55:55.462004 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:55:55.474421 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:55:55.488549 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:55:55.501428 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:55:55.523051 kernel: hv_vmbus: Vmbus version:5.3
Feb 13 15:55:55.523258 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:55:55.552881 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:55:55.573990 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 15:55:55.574021 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 15:55:55.574031 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 15:55:55.591469 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 13 15:55:55.591520 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 15:55:55.591531 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 13 15:55:55.603426 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 15:55:55.603471 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 13 15:55:55.606468 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:55:55.606634 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:55:55.618716 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:55:55.675494 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 15:55:55.675529 kernel: PTP clock support registered Feb 13 15:55:55.675540 kernel: scsi host1: storvsc_host_t Feb 13 15:55:55.675700 kernel: scsi host0: storvsc_host_t Feb 13 15:55:55.632191 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:55:55.690314 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 15:55:55.632414 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:55:55.715317 kernel: hv_netvsc 000d3afb-f613-000d-3afb-f613000d3afb eth0: VF slot 1 added Feb 13 15:55:55.715551 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 15:55:55.648862 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:55:55.701838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:55:55.735805 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 15:55:55.758223 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 15:55:55.758248 kernel: hv_vmbus: registering driver hv_pci Feb 13 15:55:55.758258 kernel: hv_pci f7f73fca-642c-4529-b034-3d326710d71d: PCI VMBus probing: Using version 0x10004 Feb 13 15:55:55.622225 kernel: hv_vmbus: registering driver hv_utils Feb 13 15:55:55.631236 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 15:55:55.632571 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 15:55:55.632591 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:55:55.632599 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 15:55:55.632607 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 15:55:55.632723 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 15:55:55.632732 kernel: hv_pci f7f73fca-642c-4529-b034-3d326710d71d: PCI host bridge to bus 642c:00 Feb 13 15:55:55.632822 kernel: pci_bus 642c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 13 15:55:55.632915 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 15:55:55.633005 kernel: pci_bus 642c:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 15:55:55.633079 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 15:55:55.633158 kernel: pci 642c:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 13 15:55:55.633271 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 15:55:55.633964 kernel: pci 642c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 13 15:55:55.634068 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 15:55:55.634155 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 15:55:55.634238 kernel: pci 642c:00:02.0: enabling Extended Tags Feb 13 15:55:55.634356 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:55:55.634369 kernel: pci 642c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 642c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 13 15:55:55.634468 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 15:55:55.634568 kernel: pci_bus 642c:00: busn_res: [bus 00-ff] end is updated to 00 Feb 13 15:55:55.634651 kernel: pci 642c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 13 15:55:55.634739 systemd-journald[218]: Time jumped backwards, rotating. Feb 13 15:55:55.791154 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:55:55.510336 systemd-resolved[264]: Clock change detected. Flushing caches. Feb 13 15:55:55.567120 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:55:55.673395 kernel: mlx5_core 642c:00:02.0: enabling device (0000 -> 0002) Feb 13 15:55:55.976620 kernel: mlx5_core 642c:00:02.0: firmware version: 16.30.1284 Feb 13 15:55:55.976818 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (494) Feb 13 15:55:55.976829 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (495) Feb 13 15:55:55.976838 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:55:55.976846 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:55:55.976860 kernel: hv_netvsc 000d3afb-f613-000d-3afb-f613000d3afb eth0: VF registering: eth1 Feb 13 15:55:55.977559 kernel: mlx5_core 642c:00:02.0 eth1: joined to eth0 Feb 13 15:55:55.977724 kernel: mlx5_core 642c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Feb 13 15:55:55.756178 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 15:55:55.993381 kernel: mlx5_core 642c:00:02.0 enP25644s1: renamed from eth1 Feb 13 15:55:55.807134 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 15:55:55.826642 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 15:55:55.838573 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 15:55:55.845901 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Feb 13 15:55:55.860458 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:55:56.892373 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:55:56.892906 disk-uuid[605]: The operation has completed successfully. Feb 13 15:55:56.966990 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:55:56.968328 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:55:57.001527 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:55:57.014674 sh[694]: Success Feb 13 15:55:57.029403 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:55:57.091428 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:55:57.120443 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:55:57.126379 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:55:57.161424 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 Feb 13 15:55:57.161482 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:55:57.168204 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:55:57.173314 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:55:57.177591 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:55:57.237094 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:55:57.242504 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Feb 13 15:55:57.258561 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:55:57.270718 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:55:57.303013 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:55:57.303065 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:55:57.307694 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:55:57.319905 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:55:57.334810 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:55:57.340315 kernel: BTRFS info (device sda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:55:57.347466 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:55:57.366565 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:55:57.373478 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:55:57.398470 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:55:57.438232 systemd-networkd[878]: lo: Link UP Feb 13 15:55:57.438247 systemd-networkd[878]: lo: Gained carrier Feb 13 15:55:57.439904 systemd-networkd[878]: Enumeration completed Feb 13 15:55:57.440010 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:55:57.446385 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:55:57.446389 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:55:57.457821 systemd[1]: Reached target network.target - Network. 
Feb 13 15:55:57.513324 kernel: mlx5_core 642c:00:02.0 enP25644s1: Link up Feb 13 15:55:57.557583 kernel: hv_netvsc 000d3afb-f613-000d-3afb-f613000d3afb eth0: Data path switched to VF: enP25644s1 Feb 13 15:55:57.557820 systemd-networkd[878]: enP25644s1: Link UP Feb 13 15:55:57.557965 systemd-networkd[878]: eth0: Link UP Feb 13 15:55:57.558112 systemd-networkd[878]: eth0: Gained carrier Feb 13 15:55:57.558122 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:55:57.583823 systemd-networkd[878]: enP25644s1: Gained carrier Feb 13 15:55:57.596375 systemd-networkd[878]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 15:55:57.636484 ignition[873]: Ignition 2.20.0 Feb 13 15:55:57.636496 ignition[873]: Stage: fetch-offline Feb 13 15:55:57.638146 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:55:57.636533 ignition[873]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:55:57.636541 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:55:57.636629 ignition[873]: parsed url from cmdline: "" Feb 13 15:55:57.636632 ignition[873]: no config URL provided Feb 13 15:55:57.636637 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:55:57.667588 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 15:55:57.636645 ignition[873]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:55:57.636649 ignition[873]: failed to fetch config: resource requires networking Feb 13 15:55:57.636828 ignition[873]: Ignition finished successfully Feb 13 15:55:57.686712 ignition[887]: Ignition 2.20.0 Feb 13 15:55:57.686719 ignition[887]: Stage: fetch Feb 13 15:55:57.686928 ignition[887]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:55:57.686938 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:55:57.687052 ignition[887]: parsed url from cmdline: "" Feb 13 15:55:57.687056 ignition[887]: no config URL provided Feb 13 15:55:57.687061 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:55:57.687077 ignition[887]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:55:57.687104 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 15:55:57.819828 ignition[887]: GET result: OK Feb 13 15:55:57.819967 ignition[887]: config has been read from IMDS userdata Feb 13 15:55:57.820008 ignition[887]: parsing config with SHA512: 8fe69bc8447eabd9459a7c96fdad3031d210814c84f3ee83a131b0efbd18345179e4d0c6c24d4bfa8d66e1770185031eef8a9293b32a7b0305c99225a560b54e Feb 13 15:55:57.824895 unknown[887]: fetched base config from "system" Feb 13 15:55:57.825246 ignition[887]: fetch: fetch complete Feb 13 15:55:57.824904 unknown[887]: fetched base config from "system" Feb 13 15:55:57.825250 ignition[887]: fetch: fetch passed Feb 13 15:55:57.824909 unknown[887]: fetched user config from "azure" Feb 13 15:55:57.825309 ignition[887]: Ignition finished successfully Feb 13 15:55:57.830323 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 15:55:57.849547 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:55:57.875780 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Feb 13 15:55:57.869949 ignition[894]: Ignition 2.20.0 Feb 13 15:55:57.869960 ignition[894]: Stage: kargs Feb 13 15:55:57.870165 ignition[894]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:55:57.870175 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:55:57.871180 ignition[894]: kargs: kargs passed Feb 13 15:55:57.871236 ignition[894]: Ignition finished successfully Feb 13 15:55:57.903627 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:55:57.925097 ignition[900]: Ignition 2.20.0 Feb 13 15:55:57.925109 ignition[900]: Stage: disks Feb 13 15:55:57.929657 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:55:57.925270 ignition[900]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:55:57.935472 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:55:57.925279 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:55:57.943955 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:55:57.926200 ignition[900]: disks: disks passed Feb 13 15:55:57.955439 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:55:57.926247 ignition[900]: Ignition finished successfully Feb 13 15:55:57.965570 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:55:57.976826 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:55:58.002568 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:55:58.049020 systemd-fsck[909]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 15:55:58.062029 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:55:58.078564 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:55:58.173313 kernel: EXT4-fs (sda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none. Feb 13 15:55:58.173636 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:55:58.178423 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:55:58.201370 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:55:58.211259 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:55:58.218520 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 15:55:58.229190 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:55:58.229238 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:55:58.263803 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:55:58.280237 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (920) Feb 13 15:55:58.280537 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:55:58.303698 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:55:58.303730 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:55:58.303740 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:55:58.317730 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:55:58.318718 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:55:58.418924 coreos-metadata[922]: Feb 13 15:55:58.418 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 15:55:58.427340 coreos-metadata[922]: Feb 13 15:55:58.426 INFO Fetch successful Feb 13 15:55:58.427340 coreos-metadata[922]: Feb 13 15:55:58.426 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 15:55:58.444556 coreos-metadata[922]: Feb 13 15:55:58.444 INFO Fetch successful Feb 13 15:55:58.450628 coreos-metadata[922]: Feb 13 15:55:58.449 INFO wrote hostname ci-4152.2.1-a-d82c5cac77 to /sysroot/etc/hostname Feb 13 15:55:58.459769 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:55:58.503480 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:55:58.513238 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:55:58.525821 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:55:58.534658 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:55:58.783504 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:55:58.800496 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:55:58.808212 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:55:58.834418 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:55:58.840072 kernel: BTRFS info (device sda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:55:58.856798 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 15:55:58.873790 ignition[1042]: INFO : Ignition 2.20.0 Feb 13 15:55:58.873790 ignition[1042]: INFO : Stage: mount Feb 13 15:55:58.888718 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:55:58.888718 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:55:58.888718 ignition[1042]: INFO : mount: mount passed Feb 13 15:55:58.888718 ignition[1042]: INFO : Ignition finished successfully Feb 13 15:55:58.879082 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:55:58.884468 systemd-networkd[878]: enP25644s1: Gained IPv6LL Feb 13 15:55:58.901577 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:55:59.180470 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:55:59.204770 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1051) Feb 13 15:55:59.204826 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:55:59.210921 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:55:59.215241 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:55:59.221308 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:55:59.223169 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:55:59.250323 ignition[1068]: INFO : Ignition 2.20.0 Feb 13 15:55:59.250323 ignition[1068]: INFO : Stage: files Feb 13 15:55:59.250323 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:55:59.250323 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:55:59.271126 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:55:59.271126 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:55:59.271126 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:55:59.301706 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:55:59.309252 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:55:59.309252 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:55:59.303756 unknown[1068]: wrote ssh authorized keys file for user: core Feb 13 15:55:59.329178 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:55:59.329178 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 15:55:59.397415 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:55:59.519413 systemd-networkd[878]: eth0: Gained IPv6LL Feb 13 15:55:59.589478 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Feb 13 15:56:00.057284 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 15:56:00.232802 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:56:00.232802 ignition[1068]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: files passed Feb 13 15:56:00.257382 ignition[1068]: INFO : Ignition finished successfully Feb 13 15:56:00.251975 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:56:00.286018 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:56:00.295515 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:56:00.326889 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:56:00.401902 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:56:00.401902 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:56:00.326986 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:56:00.431838 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:56:00.336062 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:56:00.351730 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:56:00.368545 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:56:00.412211 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:56:00.412372 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:56:00.426582 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:56:00.437762 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:56:00.451669 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:56:00.471552 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:56:00.513418 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:56:00.536678 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:56:00.551902 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Feb 13 15:56:00.558458 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:56:00.570629 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:56:00.582682 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:56:00.582821 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:56:00.599359 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:56:00.604967 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:56:00.615939 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:56:00.627191 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:56:00.637859 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:56:00.649788 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:56:00.661759 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:56:00.674792 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:56:00.685517 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:56:00.697421 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:56:00.706891 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:56:00.707016 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:56:00.721758 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:56:00.728075 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:56:00.740788 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:56:00.746339 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:56:00.759081 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Feb 13 15:56:00.759209 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:56:00.776039 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:56:00.776182 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:56:00.790115 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:56:00.790212 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:56:00.800473 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:56:00.800571 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:56:00.861193 ignition[1121]: INFO : Ignition 2.20.0 Feb 13 15:56:00.861193 ignition[1121]: INFO : Stage: umount Feb 13 15:56:00.861193 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:56:00.861193 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:56:00.861193 ignition[1121]: INFO : umount: umount passed Feb 13 15:56:00.861193 ignition[1121]: INFO : Ignition finished successfully Feb 13 15:56:00.830616 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:56:00.841495 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:56:00.847474 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:56:00.874513 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:56:00.885471 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:56:00.885649 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:56:00.903724 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:56:00.903846 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:56:00.927160 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Feb 13 15:56:00.930042 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:56:00.931332 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:56:00.944255 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:56:00.944539 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:56:00.961753 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:56:00.961818 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:56:00.973410 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:56:00.973471 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:56:00.984233 systemd[1]: Stopped target network.target - Network. Feb 13 15:56:00.993710 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:56:00.993778 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:56:01.005669 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:56:01.016517 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:56:01.020334 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:56:01.028110 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:56:01.038437 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:56:01.048193 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:56:01.048251 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:56:01.058194 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:56:01.058244 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:56:01.069768 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:56:01.069830 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Feb 13 15:56:01.081940 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:56:01.082014 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:56:01.091840 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:56:01.104759 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:56:01.117349 systemd-networkd[878]: eth0: DHCPv6 lease lost Feb 13 15:56:01.119147 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:56:01.120738 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:56:01.133016 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:56:01.133156 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:56:01.147456 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:56:01.147629 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:56:01.168525 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:56:01.168595 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:56:01.198790 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:56:01.209356 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:56:01.209437 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:56:01.374528 kernel: hv_netvsc 000d3afb-f613-000d-3afb-f613000d3afb eth0: Data path switched from VF: enP25644s1 Feb 13 15:56:01.221390 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:56:01.221452 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:56:01.232162 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:56:01.232215 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Feb 13 15:56:01.243518 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:56:01.243570 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:56:01.256225 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:56:01.310761 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:56:01.310957 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:56:01.324409 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:56:01.324492 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:56:01.336089 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:56:01.336138 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:56:01.356494 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:56:01.356563 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:56:01.374625 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:56:01.374690 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:56:01.386129 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:56:01.386208 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:56:01.429610 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:56:01.441365 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:56:01.441454 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:56:01.456719 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:56:01.456782 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 15:56:01.468456 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:56:01.470324 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:56:01.479088 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:56:01.479180 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:56:01.607609 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:56:01.607739 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:56:01.618393 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:56:01.628303 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:56:01.628381 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:56:01.652592 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:56:01.669027 systemd[1]: Switching root. Feb 13 15:56:01.706270 systemd-journald[218]: Journal stopped Feb 13 15:55:54.337626 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 15:55:54.337649 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025 Feb 13 15:55:54.337657 kernel: KASLR enabled Feb 13 15:55:54.337663 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Feb 13 15:55:54.337670 kernel: printk: bootconsole [pl11] enabled Feb 13 15:55:54.337676 kernel: efi: EFI v2.7 by EDK II Feb 13 15:55:54.337683 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e423d98 Feb 13 15:55:54.337689 kernel: random: crng init done Feb 13 15:55:54.337695 kernel: secureboot: Secure boot disabled Feb 13 15:55:54.337701 kernel: ACPI: Early table checksum verification disabled Feb 13 15:55:54.337707 kernel: ACPI: RSDP 
0x000000003FD5F018 000024 (v02 VRTUAL) Feb 13 15:55:54.337713 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:55:54.337719 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:55:54.337727 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Feb 13 15:55:54.337734 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:55:54.337741 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:55:54.337747 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:55:54.337755 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:55:54.337761 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:55:54.337767 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:55:54.337774 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Feb 13 15:55:54.337780 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:55:54.337786 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Feb 13 15:55:54.337793 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Feb 13 15:55:54.337799 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Feb 13 15:55:54.337805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Feb 13 15:55:54.337812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Feb 13 15:55:54.337818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Feb 13 15:55:54.337825 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Feb 13 15:55:54.337832 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Feb 13 15:55:54.337838 
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Feb 13 15:55:54.337844 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Feb 13 15:55:54.337850 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Feb 13 15:55:54.337857 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Feb 13 15:55:54.337863 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Feb 13 15:55:54.337869 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Feb 13 15:55:54.337875 kernel: Zone ranges: Feb 13 15:55:54.337882 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Feb 13 15:55:54.337888 kernel: DMA32 empty Feb 13 15:55:54.337894 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Feb 13 15:55:54.337904 kernel: Movable zone start for each node Feb 13 15:55:54.337910 kernel: Early memory node ranges Feb 13 15:55:54.337917 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Feb 13 15:55:54.340995 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Feb 13 15:55:54.341017 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Feb 13 15:55:54.341032 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Feb 13 15:55:54.341039 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Feb 13 15:55:54.341046 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Feb 13 15:55:54.341053 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Feb 13 15:55:54.341061 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Feb 13 15:55:54.341068 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Feb 13 15:55:54.341075 kernel: psci: probing for conduit method from ACPI. Feb 13 15:55:54.341081 kernel: psci: PSCIv1.1 detected in firmware. Feb 13 15:55:54.341088 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 15:55:54.341095 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Feb 13 15:55:54.341101 kernel: psci: SMC Calling Convention v1.4 Feb 13 15:55:54.341108 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Feb 13 15:55:54.341117 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Feb 13 15:55:54.341123 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 15:55:54.341130 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 15:55:54.341137 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 15:55:54.341144 kernel: Detected PIPT I-cache on CPU0 Feb 13 15:55:54.341151 kernel: CPU features: detected: GIC system register CPU interface Feb 13 15:55:54.341157 kernel: CPU features: detected: Hardware dirty bit management Feb 13 15:55:54.341164 kernel: CPU features: detected: Spectre-BHB Feb 13 15:55:54.341177 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 15:55:54.341185 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 15:55:54.341192 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 15:55:54.341200 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Feb 13 15:55:54.341207 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 15:55:54.341214 kernel: alternatives: applying boot alternatives Feb 13 15:55:54.341223 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6 Feb 13 15:55:54.341230 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 13 15:55:54.341237 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:55:54.341244 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:55:54.341261 kernel: Fallback order for Node 0: 0 Feb 13 15:55:54.341267 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Feb 13 15:55:54.341274 kernel: Policy zone: Normal Feb 13 15:55:54.341280 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:55:54.341289 kernel: software IO TLB: area num 2. Feb 13 15:55:54.341296 kernel: software IO TLB: mapped [mem 0x0000000036630000-0x000000003a630000] (64MB) Feb 13 15:55:54.341303 kernel: Memory: 3982436K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 211724K reserved, 0K cma-reserved) Feb 13 15:55:54.341310 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 15:55:54.341316 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:55:54.341324 kernel: rcu: RCU event tracing is enabled. Feb 13 15:55:54.341331 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 15:55:54.341338 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:55:54.341345 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:55:54.341352 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 15:55:54.341358 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 15:55:54.341367 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 15:55:54.341373 kernel: GICv3: 960 SPIs implemented Feb 13 15:55:54.341380 kernel: GICv3: 0 Extended SPIs implemented Feb 13 15:55:54.341387 kernel: Root IRQ handler: gic_handle_irq Feb 13 15:55:54.341393 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 15:55:54.341400 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Feb 13 15:55:54.341407 kernel: ITS: No ITS available, not enabling LPIs Feb 13 15:55:54.341414 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:55:54.341421 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:55:54.341428 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 15:55:54.341435 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 15:55:54.341442 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 15:55:54.341450 kernel: Console: colour dummy device 80x25 Feb 13 15:55:54.341457 kernel: printk: console [tty1] enabled Feb 13 15:55:54.341464 kernel: ACPI: Core revision 20230628 Feb 13 15:55:54.341471 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 15:55:54.341478 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:55:54.341485 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:55:54.341492 kernel: landlock: Up and running. Feb 13 15:55:54.341499 kernel: SELinux: Initializing. 
Feb 13 15:55:54.341506 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:55:54.341515 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:55:54.341522 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:55:54.341529 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:55:54.341536 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Feb 13 15:55:54.341543 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Feb 13 15:55:54.341550 kernel: Hyper-V: enabling crash_kexec_post_notifiers Feb 13 15:55:54.341557 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:55:54.341570 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:55:54.341577 kernel: Remapping and enabling EFI services. Feb 13 15:55:54.341585 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:55:54.341592 kernel: Detected PIPT I-cache on CPU1 Feb 13 15:55:54.341599 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Feb 13 15:55:54.341608 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:55:54.341615 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 15:55:54.341623 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 15:55:54.341630 kernel: SMP: Total of 2 processors activated. 
Feb 13 15:55:54.341637 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 15:55:54.341647 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Feb 13 15:55:54.341654 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 15:55:54.341661 kernel: CPU features: detected: CRC32 instructions Feb 13 15:55:54.341669 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 15:55:54.341676 kernel: CPU features: detected: LSE atomic instructions Feb 13 15:55:54.341683 kernel: CPU features: detected: Privileged Access Never Feb 13 15:55:54.341690 kernel: CPU: All CPU(s) started at EL1 Feb 13 15:55:54.341698 kernel: alternatives: applying system-wide alternatives Feb 13 15:55:54.341705 kernel: devtmpfs: initialized Feb 13 15:55:54.341714 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:55:54.341722 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 15:55:54.341729 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:55:54.341737 kernel: SMBIOS 3.1.0 present. 
Feb 13 15:55:54.341745 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Feb 13 15:55:54.341752 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:55:54.341759 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 15:55:54.341767 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 15:55:54.341776 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 15:55:54.341783 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:55:54.341790 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Feb 13 15:55:54.341798 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:55:54.341805 kernel: cpuidle: using governor menu Feb 13 15:55:54.341812 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 15:55:54.341820 kernel: ASID allocator initialised with 32768 entries Feb 13 15:55:54.341827 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:55:54.341835 kernel: Serial: AMBA PL011 UART driver Feb 13 15:55:54.341843 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 15:55:54.341851 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 15:55:54.341859 kernel: Modules: 508960 pages in range for PLT usage Feb 13 15:55:54.341866 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:55:54.341873 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:55:54.341881 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 15:55:54.341889 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 15:55:54.341896 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:55:54.341903 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:55:54.341919 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 15:55:54.342044 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 15:55:54.342053 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:55:54.342060 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:55:54.342068 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:55:54.342075 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:55:54.342083 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:55:54.342090 kernel: ACPI: Interpreter enabled Feb 13 15:55:54.342097 kernel: ACPI: Using GIC for interrupt routing Feb 13 15:55:54.342105 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Feb 13 15:55:54.342115 kernel: printk: console [ttyAMA0] enabled Feb 13 15:55:54.342122 kernel: printk: bootconsole [pl11] disabled Feb 13 15:55:54.342129 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Feb 13 15:55:54.342137 kernel: iommu: Default domain type: Translated Feb 13 15:55:54.342144 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 15:55:54.342151 kernel: efivars: Registered efivars operations Feb 13 15:55:54.342159 kernel: vgaarb: loaded Feb 13 15:55:54.342166 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 15:55:54.342173 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:55:54.342183 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:55:54.342190 kernel: pnp: PnP ACPI init Feb 13 15:55:54.342197 kernel: pnp: PnP ACPI: found 0 devices Feb 13 15:55:54.342204 kernel: NET: Registered PF_INET protocol family Feb 13 15:55:54.342212 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:55:54.342219 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:55:54.342227 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 
15:55:54.342234 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:55:54.342243 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:55:54.342251 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:55:54.342274 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:55:54.342283 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:55:54.342291 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:55:54.342298 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:55:54.342306 kernel: kvm [1]: HYP mode not available Feb 13 15:55:54.342313 kernel: Initialise system trusted keyrings Feb 13 15:55:54.342321 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:55:54.342330 kernel: Key type asymmetric registered Feb 13 15:55:54.342338 kernel: Asymmetric key parser 'x509' registered Feb 13 15:55:54.342345 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 15:55:54.342353 kernel: io scheduler mq-deadline registered Feb 13 15:55:54.342360 kernel: io scheduler kyber registered Feb 13 15:55:54.342367 kernel: io scheduler bfq registered Feb 13 15:55:54.342374 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:55:54.342382 kernel: thunder_xcv, ver 1.0 Feb 13 15:55:54.342389 kernel: thunder_bgx, ver 1.0 Feb 13 15:55:54.342396 kernel: nicpf, ver 1.0 Feb 13 15:55:54.342405 kernel: nicvf, ver 1.0 Feb 13 15:55:54.342571 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:55:54.342642 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:55:53 UTC (1739462153) Feb 13 15:55:54.342652 kernel: efifb: probing for efifb Feb 13 15:55:54.342660 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 13 15:55:54.342667 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 13 15:55:54.342675 kernel: efifb: scrolling: 
redraw Feb 13 15:55:54.342684 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 15:55:54.342692 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 15:55:54.342699 kernel: fb0: EFI VGA frame buffer device Feb 13 15:55:54.342707 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Feb 13 15:55:54.342714 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:55:54.342721 kernel: No ACPI PMU IRQ for CPU0 Feb 13 15:55:54.342729 kernel: No ACPI PMU IRQ for CPU1 Feb 13 15:55:54.342736 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Feb 13 15:55:54.342743 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:55:54.342752 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:55:54.342760 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:55:54.342767 kernel: Segment Routing with IPv6 Feb 13 15:55:54.342775 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:55:54.342782 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:55:54.342789 kernel: Key type dns_resolver registered Feb 13 15:55:54.342796 kernel: registered taskstats version 1 Feb 13 15:55:54.342804 kernel: Loading compiled-in X.509 certificates Feb 13 15:55:54.342811 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51' Feb 13 15:55:54.342818 kernel: Key type .fscrypt registered Feb 13 15:55:54.342827 kernel: Key type fscrypt-provisioning registered Feb 13 15:55:54.342835 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 15:55:54.342842 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:55:54.342850 kernel: ima: No architecture policies found Feb 13 15:55:54.342857 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:55:54.342864 kernel: clk: Disabling unused clocks Feb 13 15:55:54.342871 kernel: Freeing unused kernel memory: 39680K Feb 13 15:55:54.342879 kernel: Run /init as init process Feb 13 15:55:54.342887 kernel: with arguments: Feb 13 15:55:54.342894 kernel: /init Feb 13 15:55:54.342902 kernel: with environment: Feb 13 15:55:54.342909 kernel: HOME=/ Feb 13 15:55:54.342916 kernel: TERM=linux Feb 13 15:55:54.343719 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:55:54.343743 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:55:54.343754 systemd[1]: Detected virtualization microsoft. Feb 13 15:55:54.343767 systemd[1]: Detected architecture arm64. Feb 13 15:55:54.343775 systemd[1]: Running in initrd. Feb 13 15:55:54.343783 systemd[1]: No hostname configured, using default hostname. Feb 13 15:55:54.343790 systemd[1]: Hostname set to . Feb 13 15:55:54.343798 systemd[1]: Initializing machine ID from random generator. Feb 13 15:55:54.343806 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:55:54.343814 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:55:54.343822 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:55:54.343833 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Feb 13 15:55:54.343841 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:55:54.343849 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:55:54.343858 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:55:54.343867 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:55:54.343875 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:55:54.343883 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:55:54.343893 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:55:54.343901 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:55:54.343909 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:55:54.343917 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:55:54.343938 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:55:54.343946 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:55:54.343957 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:55:54.343966 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:55:54.343974 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:55:54.343984 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:55:54.343993 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:55:54.344001 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:55:54.344009 systemd[1]: Reached target sockets.target - Socket Units. 
Feb 13 15:55:54.344017 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:55:54.344025 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:55:54.344033 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:55:54.344041 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:55:54.344050 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:55:54.344059 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:55:54.344097 systemd-journald[218]: Collecting audit messages is disabled. Feb 13 15:55:54.344118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:55:54.344129 systemd-journald[218]: Journal started Feb 13 15:55:54.344148 systemd-journald[218]: Runtime Journal (/run/log/journal/b54f09a69a984b6083a7f752c6ff6549) is 8.0M, max 78.5M, 70.5M free. Feb 13 15:55:54.341226 systemd-modules-load[219]: Inserted module 'overlay' Feb 13 15:55:54.374787 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:55:54.374848 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:55:54.374863 kernel: Bridge firewalling registered Feb 13 15:55:54.378159 systemd-modules-load[219]: Inserted module 'br_netfilter' Feb 13 15:55:54.390872 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:55:54.397715 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:55:54.406953 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:55:54.417947 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:55:54.433951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 15:55:54.461232 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:55:54.477156 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:55:54.496455 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:55:54.510304 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:55:54.531005 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:55:54.538409 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:55:54.557496 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:55:54.566212 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:55:54.592164 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:55:54.606619 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:55:54.623606 dracut-cmdline[251]: dracut-dracut-053 Feb 13 15:55:54.621397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:55:54.644266 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6 Feb 13 15:55:54.677805 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 15:55:54.711185 systemd-resolved[264]: Positive Trust Anchors: Feb 13 15:55:54.715797 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:55:54.715835 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:55:54.718374 systemd-resolved[264]: Defaulting to hostname 'linux'. Feb 13 15:55:54.719400 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:55:54.736524 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:55:54.809942 kernel: SCSI subsystem initialized Feb 13 15:55:54.817954 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:55:54.827948 kernel: iscsi: registered transport (tcp) Feb 13 15:55:54.847382 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:55:54.847448 kernel: QLogic iSCSI HBA Driver Feb 13 15:55:54.881330 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:55:54.898199 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:55:54.933106 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 15:55:54.933170 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:55:54.939738 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:55:54.988952 kernel: raid6: neonx8 gen() 15761 MB/s Feb 13 15:55:55.008939 kernel: raid6: neonx4 gen() 15648 MB/s Feb 13 15:55:55.028938 kernel: raid6: neonx2 gen() 13242 MB/s Feb 13 15:55:55.050942 kernel: raid6: neonx1 gen() 10485 MB/s Feb 13 15:55:55.070935 kernel: raid6: int64x8 gen() 6950 MB/s Feb 13 15:55:55.090933 kernel: raid6: int64x4 gen() 7353 MB/s Feb 13 15:55:55.111939 kernel: raid6: int64x2 gen() 6133 MB/s Feb 13 15:55:55.135198 kernel: raid6: int64x1 gen() 5061 MB/s Feb 13 15:55:55.135224 kernel: raid6: using algorithm neonx8 gen() 15761 MB/s Feb 13 15:55:55.160082 kernel: raid6: .... xor() 11931 MB/s, rmw enabled Feb 13 15:55:55.160097 kernel: raid6: using neon recovery algorithm Feb 13 15:55:55.168937 kernel: xor: measuring software checksum speed Feb 13 15:55:55.175758 kernel: 8regs : 18182 MB/sec Feb 13 15:55:55.175780 kernel: 32regs : 19679 MB/sec Feb 13 15:55:55.179091 kernel: arm64_neon : 27025 MB/sec Feb 13 15:55:55.183053 kernel: xor: using function: arm64_neon (27025 MB/sec) Feb 13 15:55:55.233948 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:55:55.245817 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:55:55.263115 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:55:55.285429 systemd-udevd[438]: Using default interface naming scheme 'v255'. Feb 13 15:55:55.291613 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:55:55.310228 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:55:55.327962 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation Feb 13 15:55:55.357470 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 15:55:55.372076 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:55:55.412974 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:55:55.436228 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:55:55.462004 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:55:55.474421 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:55:55.488549 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:55:55.501428 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:55:55.523051 kernel: hv_vmbus: Vmbus version:5.3 Feb 13 15:55:55.523258 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:55:55.552881 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:55:55.573990 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 13 15:55:55.574021 kernel: hv_vmbus: registering driver hid_hyperv Feb 13 15:55:55.574031 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 15:55:55.591469 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 13 15:55:55.591520 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 15:55:55.591531 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 13 15:55:55.603426 kernel: hv_vmbus: registering driver hv_netvsc Feb 13 15:55:55.603471 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 13 15:55:55.606468 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:55:55.606634 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:55:55.618716 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:55:55.675494 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 15:55:55.675529 kernel: PTP clock support registered Feb 13 15:55:55.675540 kernel: scsi host1: storvsc_host_t Feb 13 15:55:55.675700 kernel: scsi host0: storvsc_host_t Feb 13 15:55:55.632191 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:55:55.690314 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 15:55:55.632414 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:55:55.715317 kernel: hv_netvsc 000d3afb-f613-000d-3afb-f613000d3afb eth0: VF slot 1 added Feb 13 15:55:55.715551 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 15:55:55.648862 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:55:55.701838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:55:55.735805 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 15:55:55.758223 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 15:55:55.758248 kernel: hv_vmbus: registering driver hv_pci Feb 13 15:55:55.758258 kernel: hv_pci f7f73fca-642c-4529-b034-3d326710d71d: PCI VMBus probing: Using version 0x10004 Feb 13 15:55:55.622225 kernel: hv_vmbus: registering driver hv_utils Feb 13 15:55:55.631236 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 15:55:55.632571 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 15:55:55.632591 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:55:55.632599 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 15:55:55.632607 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 15:55:55.632723 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 15:55:55.632732 kernel: hv_pci f7f73fca-642c-4529-b034-3d326710d71d: PCI host bridge to bus 642c:00 Feb 13 15:55:55.632822 kernel: pci_bus 642c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 13 15:55:55.632915 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 15:55:55.633005 kernel: pci_bus 642c:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 15:55:55.633079 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 15:55:55.633158 kernel: pci 642c:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 13 15:55:55.633271 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 15:55:55.633964 kernel: pci 642c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 13 15:55:55.634068 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 15:55:55.634155 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 15:55:55.634238 kernel: pci 642c:00:02.0: enabling Extended Tags Feb 13 15:55:55.634356 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:55:55.634369 kernel: pci 642c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 642c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Feb 13 
15:55:55.634468 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 15:55:55.634568 kernel: pci_bus 642c:00: busn_res: [bus 00-ff] end is updated to 00 Feb 13 15:55:55.634651 kernel: pci 642c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 13 15:55:55.634739 systemd-journald[218]: Time jumped backwards, rotating. Feb 13 15:55:55.791154 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:55:55.510336 systemd-resolved[264]: Clock change detected. Flushing caches. Feb 13 15:55:55.567120 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:55:55.673395 kernel: mlx5_core 642c:00:02.0: enabling device (0000 -> 0002) Feb 13 15:55:55.976620 kernel: mlx5_core 642c:00:02.0: firmware version: 16.30.1284 Feb 13 15:55:55.976818 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (494) Feb 13 15:55:55.976829 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (495) Feb 13 15:55:55.976838 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:55:55.976846 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:55:55.976860 kernel: hv_netvsc 000d3afb-f613-000d-3afb-f613000d3afb eth0: VF registering: eth1 Feb 13 15:55:55.977559 kernel: mlx5_core 642c:00:02.0 eth1: joined to eth0 Feb 13 15:55:55.977724 kernel: mlx5_core 642c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Feb 13 15:55:55.756178 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 15:55:55.993381 kernel: mlx5_core 642c:00:02.0 enP25644s1: renamed from eth1 Feb 13 15:55:55.807134 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 15:55:55.826642 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Feb 13 15:55:55.838573 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 15:55:55.845901 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Feb 13 15:55:55.860458 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:55:56.892373 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:55:56.892906 disk-uuid[605]: The operation has completed successfully. Feb 13 15:55:56.966990 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:55:56.968328 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:55:57.001527 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:55:57.014674 sh[694]: Success Feb 13 15:55:57.029403 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:55:57.091428 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:55:57.120443 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:55:57.126379 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:55:57.161424 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 Feb 13 15:55:57.161482 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:55:57.168204 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:55:57.173314 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:55:57.177591 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:55:57.237094 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:55:57.242504 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Feb 13 15:55:57.258561 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:55:57.270718 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:55:57.303013 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:55:57.303065 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:55:57.307694 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:55:57.319905 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:55:57.334810 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:55:57.340315 kernel: BTRFS info (device sda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:55:57.347466 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:55:57.366565 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:55:57.373478 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:55:57.398470 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:55:57.438232 systemd-networkd[878]: lo: Link UP Feb 13 15:55:57.438247 systemd-networkd[878]: lo: Gained carrier Feb 13 15:55:57.439904 systemd-networkd[878]: Enumeration completed Feb 13 15:55:57.440010 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:55:57.446385 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:55:57.446389 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:55:57.457821 systemd[1]: Reached target network.target - Network. 
Feb 13 15:55:57.513324 kernel: mlx5_core 642c:00:02.0 enP25644s1: Link up Feb 13 15:55:57.557583 kernel: hv_netvsc 000d3afb-f613-000d-3afb-f613000d3afb eth0: Data path switched to VF: enP25644s1 Feb 13 15:55:57.557820 systemd-networkd[878]: enP25644s1: Link UP Feb 13 15:55:57.557965 systemd-networkd[878]: eth0: Link UP Feb 13 15:55:57.558112 systemd-networkd[878]: eth0: Gained carrier Feb 13 15:55:57.558122 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:55:57.583823 systemd-networkd[878]: enP25644s1: Gained carrier Feb 13 15:55:57.596375 systemd-networkd[878]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 15:55:57.636484 ignition[873]: Ignition 2.20.0 Feb 13 15:55:57.636496 ignition[873]: Stage: fetch-offline Feb 13 15:55:57.638146 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:55:57.636533 ignition[873]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:55:57.636541 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:55:57.636629 ignition[873]: parsed url from cmdline: "" Feb 13 15:55:57.636632 ignition[873]: no config URL provided Feb 13 15:55:57.636637 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:55:57.667588 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 15:55:57.636645 ignition[873]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:55:57.636649 ignition[873]: failed to fetch config: resource requires networking Feb 13 15:55:57.636828 ignition[873]: Ignition finished successfully Feb 13 15:55:57.686712 ignition[887]: Ignition 2.20.0 Feb 13 15:55:57.686719 ignition[887]: Stage: fetch Feb 13 15:55:57.686928 ignition[887]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:55:57.686938 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:55:57.687052 ignition[887]: parsed url from cmdline: "" Feb 13 15:55:57.687056 ignition[887]: no config URL provided Feb 13 15:55:57.687061 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:55:57.687077 ignition[887]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:55:57.687104 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 15:55:57.819828 ignition[887]: GET result: OK Feb 13 15:55:57.819967 ignition[887]: config has been read from IMDS userdata Feb 13 15:55:57.820008 ignition[887]: parsing config with SHA512: 8fe69bc8447eabd9459a7c96fdad3031d210814c84f3ee83a131b0efbd18345179e4d0c6c24d4bfa8d66e1770185031eef8a9293b32a7b0305c99225a560b54e Feb 13 15:55:57.824895 unknown[887]: fetched base config from "system" Feb 13 15:55:57.825246 ignition[887]: fetch: fetch complete Feb 13 15:55:57.824904 unknown[887]: fetched base config from "system" Feb 13 15:55:57.825250 ignition[887]: fetch: fetch passed Feb 13 15:55:57.824909 unknown[887]: fetched user config from "azure" Feb 13 15:55:57.825309 ignition[887]: Ignition finished successfully Feb 13 15:55:57.830323 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 15:55:57.849547 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:55:57.875780 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Feb 13 15:55:57.869949 ignition[894]: Ignition 2.20.0 Feb 13 15:55:57.869960 ignition[894]: Stage: kargs Feb 13 15:55:57.870165 ignition[894]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:55:57.870175 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:55:57.871180 ignition[894]: kargs: kargs passed Feb 13 15:55:57.871236 ignition[894]: Ignition finished successfully Feb 13 15:55:57.903627 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:55:57.925097 ignition[900]: Ignition 2.20.0 Feb 13 15:55:57.925109 ignition[900]: Stage: disks Feb 13 15:55:57.929657 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:55:57.925270 ignition[900]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:55:57.935472 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:55:57.925279 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:55:57.943955 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:55:57.926200 ignition[900]: disks: disks passed Feb 13 15:55:57.955439 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:55:57.926247 ignition[900]: Ignition finished successfully Feb 13 15:55:57.965570 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:55:57.976826 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:55:58.002568 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:55:58.049020 systemd-fsck[909]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 15:55:58.062029 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:55:58.078564 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:55:58.173313 kernel: EXT4-fs (sda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. 
Quota mode: none. Feb 13 15:55:58.173636 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:55:58.178423 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:55:58.201370 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:55:58.211259 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:55:58.218520 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 15:55:58.229190 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:55:58.229238 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:55:58.263803 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:55:58.280237 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (920) Feb 13 15:55:58.280537 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:55:58.303698 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:55:58.303730 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:55:58.303740 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:55:58.317730 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:55:58.318718 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:55:58.418924 coreos-metadata[922]: Feb 13 15:55:58.418 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 15:55:58.427340 coreos-metadata[922]: Feb 13 15:55:58.426 INFO Fetch successful Feb 13 15:55:58.427340 coreos-metadata[922]: Feb 13 15:55:58.426 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 15:55:58.444556 coreos-metadata[922]: Feb 13 15:55:58.444 INFO Fetch successful Feb 13 15:55:58.450628 coreos-metadata[922]: Feb 13 15:55:58.449 INFO wrote hostname ci-4152.2.1-a-d82c5cac77 to /sysroot/etc/hostname Feb 13 15:55:58.459769 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:55:58.503480 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:55:58.513238 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:55:58.525821 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:55:58.534658 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:55:58.783504 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:55:58.800496 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:55:58.808212 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:55:58.834418 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:55:58.840072 kernel: BTRFS info (device sda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:55:58.856798 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 15:55:58.873790 ignition[1042]: INFO : Ignition 2.20.0 Feb 13 15:55:58.873790 ignition[1042]: INFO : Stage: mount Feb 13 15:55:58.888718 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:55:58.888718 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:55:58.888718 ignition[1042]: INFO : mount: mount passed Feb 13 15:55:58.888718 ignition[1042]: INFO : Ignition finished successfully Feb 13 15:55:58.879082 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:55:58.884468 systemd-networkd[878]: enP25644s1: Gained IPv6LL Feb 13 15:55:58.901577 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:55:59.180470 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:55:59.204770 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1051) Feb 13 15:55:59.204826 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:55:59.210921 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:55:59.215241 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:55:59.221308 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:55:59.223169 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:55:59.250323 ignition[1068]: INFO : Ignition 2.20.0 Feb 13 15:55:59.250323 ignition[1068]: INFO : Stage: files Feb 13 15:55:59.250323 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:55:59.250323 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:55:59.271126 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:55:59.271126 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:55:59.271126 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:55:59.301706 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:55:59.309252 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:55:59.309252 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:55:59.303756 unknown[1068]: wrote ssh authorized keys file for user: core Feb 13 15:55:59.329178 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:55:59.329178 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 15:55:59.397415 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:55:59.519413 systemd-networkd[878]: eth0: Gained IPv6LL Feb 13 15:55:59.589478 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:55:59.600282 ignition[1068]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Feb 13 15:56:00.057284 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 15:56:00.232802 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:56:00.232802 ignition[1068]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:56:00.257382 ignition[1068]: INFO : files: files passed Feb 13 15:56:00.257382 ignition[1068]: INFO : Ignition finished successfully Feb 13 15:56:00.251975 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:56:00.286018 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Feb 13 15:56:00.295515 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:56:00.326889 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:56:00.401902 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:56:00.401902 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:56:00.326986 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:56:00.431838 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:56:00.336062 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:56:00.351730 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:56:00.368545 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:56:00.412211 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:56:00.412372 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:56:00.426582 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:56:00.437762 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:56:00.451669 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:56:00.471552 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:56:00.513418 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:56:00.536678 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:56:00.551902 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Feb 13 15:56:00.558458 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:56:00.570629 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:56:00.582682 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:56:00.582821 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:56:00.599359 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:56:00.604967 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:56:00.615939 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:56:00.627191 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:56:00.637859 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:56:00.649788 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:56:00.661759 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:56:00.674792 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:56:00.685517 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:56:00.697421 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:56:00.706891 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:56:00.707016 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:56:00.721758 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:56:00.728075 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:56:00.740788 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:56:00.746339 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:56:00.759081 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:56:00.759209 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:56:00.776039 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:56:00.776182 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:56:00.790115 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:56:00.790212 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:56:00.800473 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 15:56:00.800571 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:56:00.861193 ignition[1121]: INFO : Ignition 2.20.0
Feb 13 15:56:00.861193 ignition[1121]: INFO : Stage: umount
Feb 13 15:56:00.861193 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:56:00.861193 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:56:00.861193 ignition[1121]: INFO : umount: umount passed
Feb 13 15:56:00.861193 ignition[1121]: INFO : Ignition finished successfully
Feb 13 15:56:00.830616 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:56:00.841495 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:56:00.847474 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:56:00.874513 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:56:00.885471 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:56:00.885649 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:56:00.903724 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:56:00.903846 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:56:00.927160 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:56:00.930042 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:56:00.931332 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:56:00.944255 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:56:00.944539 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:56:00.961753 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:56:00.961818 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:56:00.973410 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:56:00.973471 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:56:00.984233 systemd[1]: Stopped target network.target - Network.
Feb 13 15:56:00.993710 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:56:00.993778 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:56:01.005669 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:56:01.016517 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:56:01.020334 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:56:01.028110 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:56:01.038437 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:56:01.048193 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:56:01.048251 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:56:01.058194 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:56:01.058244 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:56:01.069768 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:56:01.069830 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:56:01.081940 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:56:01.082014 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:56:01.091840 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:56:01.104759 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:56:01.117349 systemd-networkd[878]: eth0: DHCPv6 lease lost
Feb 13 15:56:01.119147 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:56:01.120738 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:56:01.133016 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:56:01.133156 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:56:01.147456 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:56:01.147629 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:56:01.168525 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:56:01.168595 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:56:01.198790 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:56:01.209356 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:56:01.209437 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:56:01.374528 kernel: hv_netvsc 000d3afb-f613-000d-3afb-f613000d3afb eth0: Data path switched from VF: enP25644s1
Feb 13 15:56:01.221390 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:56:01.221452 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:56:01.232162 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:56:01.232215 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:56:01.243518 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:56:01.243570 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:56:01.256225 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:56:01.310761 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:56:01.310957 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:56:01.324409 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:56:01.324492 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:56:01.336089 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:56:01.336138 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:56:01.356494 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:56:01.356563 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:56:01.374625 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:56:01.374690 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:56:01.386129 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:56:01.386208 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:56:01.429610 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:56:01.441365 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:56:01.441454 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:56:01.456719 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:56:01.456782 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:56:01.468456 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:56:01.470324 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:56:01.479088 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:56:01.479180 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:56:01.607609 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:56:01.607739 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:56:01.618393 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:56:01.628303 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:56:01.628381 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:56:01.652592 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:56:01.669027 systemd[1]: Switching root.
Feb 13 15:56:01.706270 systemd-journald[218]: Journal stopped
Feb 13 15:56:03.642557 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:56:03.642585 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:56:03.642596 kernel: SELinux: policy capability open_perms=1
Feb 13 15:56:03.642606 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:56:03.642614 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:56:03.642622 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:56:03.642631 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:56:03.642639 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:56:03.642647 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:56:03.642658 kernel: audit: type=1403 audit(1739462162.023:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:56:03.642668 systemd[1]: Successfully loaded SELinux policy in 88.874ms.
Feb 13 15:56:03.642678 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.700ms.
Feb 13 15:56:03.642688 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:56:03.642697 systemd[1]: Detected virtualization microsoft.
Feb 13 15:56:03.642707 systemd[1]: Detected architecture arm64.
Feb 13 15:56:03.642717 systemd[1]: Detected first boot.
Feb 13 15:56:03.642727 systemd[1]: Hostname set to .
Feb 13 15:56:03.642736 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:56:03.642745 zram_generator::config[1164]: No configuration found.
Feb 13 15:56:03.642755 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:56:03.642764 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:56:03.642775 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:56:03.642784 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:56:03.642794 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:56:03.642803 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:56:03.642813 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:56:03.642822 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:56:03.642831 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:56:03.642843 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:56:03.642852 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:56:03.642862 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:56:03.642871 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:56:03.642880 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:56:03.642889 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:56:03.642898 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:56:03.642908 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:56:03.642917 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:56:03.642928 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:56:03.642937 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:56:03.642946 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:56:03.642958 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:56:03.642967 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:56:03.642977 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:56:03.642986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:56:03.642997 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:56:03.643007 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:56:03.643016 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:56:03.643026 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:56:03.643035 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:56:03.643045 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:56:03.643054 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:56:03.643066 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:56:03.643076 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:56:03.643086 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:56:03.643095 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:56:03.643105 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:56:03.643114 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:56:03.643125 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:56:03.643135 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:56:03.643145 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:56:03.643154 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:56:03.643164 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:56:03.643174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:56:03.643183 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:56:03.643193 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:56:03.643205 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:56:03.643214 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:56:03.643224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:56:03.643234 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:56:03.643244 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:56:03.643253 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:56:03.643263 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:56:03.643273 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:56:03.643283 kernel: fuse: init (API version 7.39)
Feb 13 15:56:03.643302 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:56:03.643314 kernel: loop: module loaded
Feb 13 15:56:03.643323 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:56:03.643332 kernel: ACPI: bus type drm_connector registered
Feb 13 15:56:03.643341 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:56:03.643350 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:56:03.643378 systemd-journald[1267]: Collecting audit messages is disabled.
Feb 13 15:56:03.643401 systemd-journald[1267]: Journal started
Feb 13 15:56:03.643422 systemd-journald[1267]: Runtime Journal (/run/log/journal/7420069b02474c9bb5db5e6d36994bf2) is 8.0M, max 78.5M, 70.5M free.
Feb 13 15:56:02.809641 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:56:02.841267 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 15:56:02.841630 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:56:02.841933 systemd[1]: systemd-journald.service: Consumed 3.043s CPU time.
Feb 13 15:56:03.661829 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:56:03.679338 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:56:03.694493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:56:03.694830 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:56:03.703956 systemd[1]: Stopped verity-setup.service.
Feb 13 15:56:03.720327 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:56:03.720813 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:56:03.726605 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:56:03.732794 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:56:03.738179 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:56:03.744225 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:56:03.750445 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:56:03.757366 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:56:03.764182 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:56:03.771506 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:56:03.771645 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:56:03.778360 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:56:03.778500 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:56:03.785166 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:56:03.785348 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:56:03.791826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:56:03.791986 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:56:03.799169 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:56:03.799324 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:56:03.805615 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:56:03.805763 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:56:03.812256 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:56:03.819707 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:56:03.826856 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:56:03.834073 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:56:03.849477 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:56:03.862442 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:56:03.869907 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:56:03.876087 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:56:03.876130 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:56:03.883086 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:56:03.891108 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:56:03.898817 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:56:03.904391 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:56:03.906514 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:56:03.915504 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:56:03.925404 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:56:03.927589 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:56:03.933653 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:56:03.937476 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:56:03.939998 systemd-journald[1267]: Time spent on flushing to /var/log/journal/7420069b02474c9bb5db5e6d36994bf2 is 104.177ms for 892 entries.
Feb 13 15:56:03.939998 systemd-journald[1267]: System Journal (/var/log/journal/7420069b02474c9bb5db5e6d36994bf2) is 11.8M, max 2.6G, 2.6G free.
Feb 13 15:56:04.136511 systemd-journald[1267]: Received client request to flush runtime journal.
Feb 13 15:56:04.136565 systemd-journald[1267]: /var/log/journal/7420069b02474c9bb5db5e6d36994bf2/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Feb 13 15:56:04.136595 kernel: loop0: detected capacity change from 0 to 194512
Feb 13 15:56:04.136615 systemd-journald[1267]: Rotating system journal.
Feb 13 15:56:04.136637 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:56:03.954783 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:56:03.965498 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:56:03.973527 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:56:03.984219 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:56:04.019077 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:56:04.035676 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:56:04.047373 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:56:04.056945 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:56:04.083975 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:56:04.099596 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:56:04.107414 udevadm[1301]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:56:04.127840 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:56:04.138980 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:56:04.155921 kernel: loop1: detected capacity change from 0 to 28720
Feb 13 15:56:04.156878 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:56:04.188574 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:56:04.189439 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:56:04.206406 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Feb 13 15:56:04.206426 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Feb 13 15:56:04.212757 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:56:04.280328 kernel: loop2: detected capacity change from 0 to 116808
Feb 13 15:56:04.382323 kernel: loop3: detected capacity change from 0 to 113536
Feb 13 15:56:04.471379 kernel: loop4: detected capacity change from 0 to 194512
Feb 13 15:56:04.481357 kernel: loop5: detected capacity change from 0 to 28720
Feb 13 15:56:04.490348 kernel: loop6: detected capacity change from 0 to 116808
Feb 13 15:56:04.504385 kernel: loop7: detected capacity change from 0 to 113536
Feb 13 15:56:04.507499 (sd-merge)[1325]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Feb 13 15:56:04.507949 (sd-merge)[1325]: Merged extensions into '/usr'.
Feb 13 15:56:04.512192 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:56:04.512362 systemd[1]: Reloading...
Feb 13 15:56:04.628331 zram_generator::config[1357]: No configuration found.
Feb 13 15:56:04.739145 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:56:04.794895 systemd[1]: Reloading finished in 282 ms.
Feb 13 15:56:04.830290 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:56:04.837160 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:56:04.853705 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:56:04.860549 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:56:04.869638 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:56:04.883222 systemd[1]: Reloading requested from client PID 1407 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:56:04.883242 systemd[1]: Reloading...
Feb 13 15:56:04.902750 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:56:04.903402 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:56:04.904168 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:56:04.904515 systemd-tmpfiles[1408]: ACLs are not supported, ignoring.
Feb 13 15:56:04.904650 systemd-tmpfiles[1408]: ACLs are not supported, ignoring.
Feb 13 15:56:04.907599 systemd-tmpfiles[1408]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:56:04.907740 systemd-tmpfiles[1408]: Skipping /boot
Feb 13 15:56:04.918967 systemd-tmpfiles[1408]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:56:04.919137 systemd-tmpfiles[1408]: Skipping /boot
Feb 13 15:56:04.926837 systemd-udevd[1409]: Using default interface naming scheme 'v255'.
Feb 13 15:56:04.984328 zram_generator::config[1436]: No configuration found.
Feb 13 15:56:05.168575 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:56:05.195961 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:56:05.249648 kernel: hv_vmbus: registering driver hv_balloon
Feb 13 15:56:05.249743 kernel: hv_vmbus: registering driver hyperv_fb
Feb 13 15:56:05.249758 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 13 15:56:05.263458 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 13 15:56:05.263550 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 13 15:56:05.268794 kernel: hv_balloon: Memory hot add disabled on ARM64
Feb 13 15:56:05.275053 kernel: Console: switching to colour dummy device 80x25
Feb 13 15:56:05.283923 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 15:56:05.292877 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 15:56:05.293029 systemd[1]: Reloading finished in 409 ms.
Feb 13 15:56:05.311776 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:56:05.338342 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1486)
Feb 13 15:56:05.344111 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:56:05.380965 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:56:05.398852 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Feb 13 15:56:05.408677 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:56:05.438663 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:56:05.448515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:56:05.457279 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:56:05.485862 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:56:05.495717 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:56:05.511722 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:56:05.518318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:56:05.525519 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:56:05.531058 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:56:05.538451 augenrules[1618]: No rules Feb 13 15:56:05.540940 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:56:05.551137 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:56:05.557416 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:56:05.566673 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:56:05.577448 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:56:05.586803 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:56:05.598552 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:56:05.599025 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:56:05.605863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:56:05.606036 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:56:05.612689 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:56:05.612827 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:56:05.619310 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:56:05.620105 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Feb 13 15:56:05.627139 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:56:05.627291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:56:05.634133 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:56:05.662203 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:56:05.673702 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:56:05.687685 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 15:56:05.707770 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:56:05.715484 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:56:05.721942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:56:05.722015 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:56:05.724278 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:56:05.747526 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:56:05.754485 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:56:05.763962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:56:05.764137 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:56:05.770398 lvm[1638]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:56:05.773364 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Feb 13 15:56:05.783063 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:56:05.796907 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:56:05.813532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:56:05.823495 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:56:05.824042 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:56:05.834556 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:56:05.853531 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:56:05.870170 lvm[1657]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:56:05.890754 systemd-networkd[1623]: lo: Link UP Feb 13 15:56:05.891096 systemd-networkd[1623]: lo: Gained carrier Feb 13 15:56:05.894606 systemd-networkd[1623]: Enumeration completed Feb 13 15:56:05.894764 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:56:05.895180 systemd-networkd[1623]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:56:05.895246 systemd-networkd[1623]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:56:05.913581 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:56:05.923337 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:56:05.930806 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:56:05.933169 systemd-resolved[1624]: Positive Trust Anchors: Feb 13 15:56:05.933541 systemd-resolved[1624]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:56:05.933640 systemd-resolved[1624]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:56:05.941421 systemd-resolved[1624]: Using system hostname 'ci-4152.2.1-a-d82c5cac77'. Feb 13 15:56:05.955316 kernel: mlx5_core 642c:00:02.0 enP25644s1: Link up Feb 13 15:56:05.982343 kernel: hv_netvsc 000d3afb-f613-000d-3afb-f613000d3afb eth0: Data path switched to VF: enP25644s1 Feb 13 15:56:05.983570 systemd-networkd[1623]: enP25644s1: Link UP Feb 13 15:56:05.983689 systemd-networkd[1623]: eth0: Link UP Feb 13 15:56:05.983692 systemd-networkd[1623]: eth0: Gained carrier Feb 13 15:56:05.983705 systemd-networkd[1623]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:56:05.983984 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:56:05.990305 systemd[1]: Reached target network.target - Network. Feb 13 15:56:05.995187 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:56:06.001679 systemd-networkd[1623]: enP25644s1: Gained carrier Feb 13 15:56:06.002411 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:56:06.008163 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Feb 13 15:56:06.015098 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:56:06.022076 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:56:06.028166 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:56:06.034877 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:56:06.041755 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:56:06.041790 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:56:06.046612 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:56:06.052428 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:56:06.060402 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:56:06.070256 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:56:06.077072 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:56:06.083015 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:56:06.088596 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:56:06.094017 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:56:06.094045 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:56:06.097370 systemd-networkd[1623]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 15:56:06.101466 systemd[1]: Starting chronyd.service - NTP client/server... Feb 13 15:56:06.111466 systemd[1]: Starting containerd.service - containerd container runtime... 
Feb 13 15:56:06.122012 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:56:06.130889 (chronyd)[1666]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Feb 13 15:56:06.132531 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:56:06.139573 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:56:06.144455 chronyd[1675]: chronyd version 4.6 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Feb 13 15:56:06.148600 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:56:06.155262 jq[1673]: false Feb 13 15:56:06.157894 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:56:06.158228 chronyd[1675]: Timezone right/UTC failed leap second check, ignoring Feb 13 15:56:06.158464 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Feb 13 15:56:06.158480 chronyd[1675]: Loaded seccomp filter (level 2) Feb 13 15:56:06.162544 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Feb 13 15:56:06.174730 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
Feb 13 15:56:06.176817 extend-filesystems[1676]: Found loop4 Feb 13 15:56:06.183581 extend-filesystems[1676]: Found loop5 Feb 13 15:56:06.183581 extend-filesystems[1676]: Found loop6 Feb 13 15:56:06.183581 extend-filesystems[1676]: Found loop7 Feb 13 15:56:06.183581 extend-filesystems[1676]: Found sda Feb 13 15:56:06.183581 extend-filesystems[1676]: Found sda1 Feb 13 15:56:06.183581 extend-filesystems[1676]: Found sda2 Feb 13 15:56:06.183581 extend-filesystems[1676]: Found sda3 Feb 13 15:56:06.183581 extend-filesystems[1676]: Found usr Feb 13 15:56:06.183581 extend-filesystems[1676]: Found sda4 Feb 13 15:56:06.183581 extend-filesystems[1676]: Found sda6 Feb 13 15:56:06.183581 extend-filesystems[1676]: Found sda7 Feb 13 15:56:06.183581 extend-filesystems[1676]: Found sda9 Feb 13 15:56:06.183581 extend-filesystems[1676]: Checking size of /dev/sda9 Feb 13 15:56:06.343066 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1451) Feb 13 15:56:06.343098 kernel: hv_utils: KVP IC version 4.0 Feb 13 15:56:06.179548 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Feb 13 15:56:06.176865 KVP[1677]: KVP starting; pid is:1677 Feb 13 15:56:06.349377 coreos-metadata[1668]: Feb 13 15:56:06.337 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 15:56:06.349377 coreos-metadata[1668]: Feb 13 15:56:06.341 INFO Fetch successful Feb 13 15:56:06.349377 coreos-metadata[1668]: Feb 13 15:56:06.341 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 13 15:56:06.349377 coreos-metadata[1668]: Feb 13 15:56:06.347 INFO Fetch successful Feb 13 15:56:06.349377 coreos-metadata[1668]: Feb 13 15:56:06.348 INFO Fetching http://168.63.129.16/machine/2ef03ea5-78ee-428f-b577-b799c116cda5/e946dfeb%2Dc166%2D428f%2D91b5%2D8b929017d50f.%5Fci%2D4152.2.1%2Da%2Dd82c5cac77?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 13 15:56:06.349700 extend-filesystems[1676]: Old size kept for /dev/sda9 Feb 13 15:56:06.349700 extend-filesystems[1676]: Found sr0 Feb 13 15:56:06.204664 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:56:06.226705 dbus-daemon[1669]: [system] SELinux support is enabled Feb 13 15:56:06.390355 coreos-metadata[1668]: Feb 13 15:56:06.356 INFO Fetch successful Feb 13 15:56:06.390355 coreos-metadata[1668]: Feb 13 15:56:06.356 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 13 15:56:06.390355 coreos-metadata[1668]: Feb 13 15:56:06.372 INFO Fetch successful Feb 13 15:56:06.236910 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:56:06.289508 KVP[1677]: KVP LIC Version: 3.1 Feb 13 15:56:06.253025 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:56:06.305658 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:56:06.317950 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 13 15:56:06.318543 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:56:06.339684 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:56:06.394498 jq[1723]: true Feb 13 15:56:06.359000 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:56:06.372828 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:56:06.384416 systemd[1]: Started chronyd.service - NTP client/server. Feb 13 15:56:06.401345 update_engine[1719]: I20250213 15:56:06.398903 1719 main.cc:92] Flatcar Update Engine starting Feb 13 15:56:06.402162 update_engine[1719]: I20250213 15:56:06.401913 1719 update_check_scheduler.cc:74] Next update check in 5m1s Feb 13 15:56:06.432764 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:56:06.435347 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:56:06.435736 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:56:06.437334 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:56:06.445731 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:56:06.446378 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:56:06.460059 systemd-logind[1709]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:56:06.463738 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:56:06.463935 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:56:06.464225 systemd-logind[1709]: New seat seat0. Feb 13 15:56:06.484437 systemd[1]: Started systemd-logind.service - User Login Management. 
Feb 13 15:56:06.504713 (ntainerd)[1756]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:56:06.510176 jq[1755]: true Feb 13 15:56:06.525981 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:56:06.549350 dbus-daemon[1669]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 15:56:06.550598 tar[1754]: linux-arm64/helm Feb 13 15:56:06.561056 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:56:06.571042 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:56:06.571267 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:56:06.571416 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:56:06.582291 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:56:06.582437 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:56:06.600371 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:56:06.656707 bash[1787]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:56:06.659775 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:56:06.674547 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Feb 13 15:56:06.739449 locksmithd[1783]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:56:06.750211 containerd[1756]: time="2025-02-13T15:56:06.750100240Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:56:06.818281 containerd[1756]: time="2025-02-13T15:56:06.817849560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:56:06.823885 containerd[1756]: time="2025-02-13T15:56:06.823544360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:56:06.823885 containerd[1756]: time="2025-02-13T15:56:06.823588720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:56:06.823885 containerd[1756]: time="2025-02-13T15:56:06.823608320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:56:06.823885 containerd[1756]: time="2025-02-13T15:56:06.823772600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:56:06.823885 containerd[1756]: time="2025-02-13T15:56:06.823789000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:56:06.823885 containerd[1756]: time="2025-02-13T15:56:06.823861720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:56:06.823885 containerd[1756]: time="2025-02-13T15:56:06.823875440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:56:06.824100 containerd[1756]: time="2025-02-13T15:56:06.824037080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:56:06.824100 containerd[1756]: time="2025-02-13T15:56:06.824051080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:56:06.824100 containerd[1756]: time="2025-02-13T15:56:06.824063240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:56:06.824100 containerd[1756]: time="2025-02-13T15:56:06.824073120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:56:06.824176 containerd[1756]: time="2025-02-13T15:56:06.824146000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:56:06.827041 containerd[1756]: time="2025-02-13T15:56:06.826430240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:56:06.827041 containerd[1756]: time="2025-02-13T15:56:06.826582240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:56:06.827041 containerd[1756]: time="2025-02-13T15:56:06.826598440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 15:56:06.827041 containerd[1756]: time="2025-02-13T15:56:06.826702280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:56:06.827041 containerd[1756]: time="2025-02-13T15:56:06.826756960Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:56:06.845583 containerd[1756]: time="2025-02-13T15:56:06.845532520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:56:06.845583 containerd[1756]: time="2025-02-13T15:56:06.845601440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:56:06.845583 containerd[1756]: time="2025-02-13T15:56:06.845618800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:56:06.845583 containerd[1756]: time="2025-02-13T15:56:06.845635640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:56:06.845798 containerd[1756]: time="2025-02-13T15:56:06.845654040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:56:06.846184 containerd[1756]: time="2025-02-13T15:56:06.845823440Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:56:06.846184 containerd[1756]: time="2025-02-13T15:56:06.846072160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:56:06.846184 containerd[1756]: time="2025-02-13T15:56:06.846169320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:56:06.846184 containerd[1756]: time="2025-02-13T15:56:06.846184800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Feb 13 15:56:06.846273 containerd[1756]: time="2025-02-13T15:56:06.846200800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:56:06.846273 containerd[1756]: time="2025-02-13T15:56:06.846214680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:56:06.846273 containerd[1756]: time="2025-02-13T15:56:06.846228600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:56:06.846273 containerd[1756]: time="2025-02-13T15:56:06.846243280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:56:06.846273 containerd[1756]: time="2025-02-13T15:56:06.846258120Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:56:06.846273 containerd[1756]: time="2025-02-13T15:56:06.846273200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:56:06.846724 containerd[1756]: time="2025-02-13T15:56:06.846286240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:56:06.846724 containerd[1756]: time="2025-02-13T15:56:06.846332720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:56:06.846724 containerd[1756]: time="2025-02-13T15:56:06.846347600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:56:06.846724 containerd[1756]: time="2025-02-13T15:56:06.846369960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 13 15:56:06.846724 containerd[1756]: time="2025-02-13T15:56:06.846385480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:56:06.846724 containerd[1756]: time="2025-02-13T15:56:06.846398320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:56:06.846724 containerd[1756]: time="2025-02-13T15:56:06.846411680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:56:06.846724 containerd[1756]: time="2025-02-13T15:56:06.846424400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:56:06.846724 containerd[1756]: time="2025-02-13T15:56:06.846436440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:56:06.846724 containerd[1756]: time="2025-02-13T15:56:06.846448080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:56:06.846724 containerd[1756]: time="2025-02-13T15:56:06.846460400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:56:06.846724 containerd[1756]: time="2025-02-13T15:56:06.846473560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:56:06.846724 containerd[1756]: time="2025-02-13T15:56:06.846488520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:56:06.846724 containerd[1756]: time="2025-02-13T15:56:06.846500720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:56:06.846960 containerd[1756]: time="2025-02-13T15:56:06.846513360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 13 15:56:06.846960 containerd[1756]: time="2025-02-13T15:56:06.846526760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:56:06.846960 containerd[1756]: time="2025-02-13T15:56:06.846546880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:56:06.846960 containerd[1756]: time="2025-02-13T15:56:06.846571320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:56:06.846960 containerd[1756]: time="2025-02-13T15:56:06.846585320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:56:06.846960 containerd[1756]: time="2025-02-13T15:56:06.846596600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:56:06.846960 containerd[1756]: time="2025-02-13T15:56:06.846654960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:56:06.846960 containerd[1756]: time="2025-02-13T15:56:06.846675000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:56:06.846960 containerd[1756]: time="2025-02-13T15:56:06.846688680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:56:06.846960 containerd[1756]: time="2025-02-13T15:56:06.846700880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:56:06.846960 containerd[1756]: time="2025-02-13T15:56:06.846710880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 13 15:56:06.846960 containerd[1756]: time="2025-02-13T15:56:06.846723040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:56:06.846960 containerd[1756]: time="2025-02-13T15:56:06.846733400Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:56:06.846960 containerd[1756]: time="2025-02-13T15:56:06.846744600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:56:06.847196 containerd[1756]: time="2025-02-13T15:56:06.847014600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:56:06.847196 containerd[1756]: time="2025-02-13T15:56:06.847063400Z" level=info msg="Connect containerd service" Feb 13 15:56:06.847196 containerd[1756]: time="2025-02-13T15:56:06.847100200Z" level=info msg="using legacy CRI server" Feb 13 15:56:06.847196 containerd[1756]: time="2025-02-13T15:56:06.847107160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:56:06.847371 containerd[1756]: time="2025-02-13T15:56:06.847250040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:56:06.853618 containerd[1756]: time="2025-02-13T15:56:06.852211120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Feb 13 15:56:06.853618 containerd[1756]: time="2025-02-13T15:56:06.852648400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:56:06.853618 containerd[1756]: time="2025-02-13T15:56:06.852921320Z" level=info msg="Start subscribing containerd event" Feb 13 15:56:06.853618 containerd[1756]: time="2025-02-13T15:56:06.852982920Z" level=info msg="Start recovering state" Feb 13 15:56:06.853618 containerd[1756]: time="2025-02-13T15:56:06.853053320Z" level=info msg="Start event monitor" Feb 13 15:56:06.853618 containerd[1756]: time="2025-02-13T15:56:06.853064120Z" level=info msg="Start snapshots syncer" Feb 13 15:56:06.853618 containerd[1756]: time="2025-02-13T15:56:06.853073760Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:56:06.853618 containerd[1756]: time="2025-02-13T15:56:06.853082520Z" level=info msg="Start streaming server" Feb 13 15:56:06.855338 containerd[1756]: time="2025-02-13T15:56:06.854347040Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:56:06.854545 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:56:06.864707 containerd[1756]: time="2025-02-13T15:56:06.864501280Z" level=info msg="containerd successfully booted in 0.116086s" Feb 13 15:56:06.900811 sshd_keygen[1712]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:56:06.921257 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:56:06.935182 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:56:06.950607 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:56:06.950858 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:56:06.962667 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:56:06.979766 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Feb 13 15:56:06.992699 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:56:07.004684 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:56:07.010512 systemd-networkd[1623]: enP25644s1: Gained IPv6LL Feb 13 15:56:07.011736 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:56:07.057722 tar[1754]: linux-arm64/LICENSE Feb 13 15:56:07.057722 tar[1754]: linux-arm64/README.md Feb 13 15:56:07.068400 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:56:07.775555 systemd-networkd[1623]: eth0: Gained IPv6LL Feb 13 15:56:07.777741 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:56:07.785964 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:56:07.797523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:56:07.804386 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:56:07.811614 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 15:56:07.833489 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:56:07.852720 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Feb 13 15:56:08.403500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:56:08.410803 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:56:08.413728 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:56:08.422477 systemd[1]: Startup finished in 701ms (kernel) + 8.466s (initrd) + 6.486s (userspace) = 15.654s. 
Feb 13 15:56:08.525349 waagent[1839]: 2025-02-13T15:56:08.525231Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 15:56:08.531640 waagent[1839]: 2025-02-13T15:56:08.531551Z INFO Daemon Daemon OS: flatcar 4152.2.1 Feb 13 15:56:08.537471 waagent[1839]: 2025-02-13T15:56:08.537388Z INFO Daemon Daemon Python: 3.11.10 Feb 13 15:56:08.544854 waagent[1839]: 2025-02-13T15:56:08.544363Z INFO Daemon Daemon Run daemon Feb 13 15:56:08.549087 waagent[1839]: 2025-02-13T15:56:08.548563Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4152.2.1' Feb 13 15:56:08.558515 waagent[1839]: 2025-02-13T15:56:08.558425Z INFO Daemon Daemon Using waagent for provisioning Feb 13 15:56:08.558933 login[1817]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:08.567694 waagent[1839]: 2025-02-13T15:56:08.564008Z INFO Daemon Daemon Activate resource disk Feb 13 15:56:08.569681 login[1818]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:08.575920 waagent[1839]: 2025-02-13T15:56:08.574386Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 15:56:08.593027 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:56:08.598425 waagent[1839]: 2025-02-13T15:56:08.597502Z INFO Daemon Daemon Found device: None Feb 13 15:56:08.603984 waagent[1839]: 2025-02-13T15:56:08.603893Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 15:56:08.604670 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:56:08.607522 systemd-logind[1709]: New session 2 of user core. Feb 13 15:56:08.612036 systemd-logind[1709]: New session 1 of user core. 
Feb 13 15:56:08.615409 waagent[1839]: 2025-02-13T15:56:08.613812Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 15:56:08.628335 waagent[1839]: 2025-02-13T15:56:08.626514Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 15:56:08.627225 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:56:08.632616 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:56:08.635175 waagent[1839]: 2025-02-13T15:56:08.635078Z INFO Daemon Daemon Running default provisioning handler Feb 13 15:56:08.645842 (systemd)[1858]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:56:08.653987 waagent[1839]: 2025-02-13T15:56:08.653881Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Feb 13 15:56:08.667793 waagent[1839]: 2025-02-13T15:56:08.667627Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 15:56:08.677736 waagent[1839]: 2025-02-13T15:56:08.677660Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 15:56:08.682702 waagent[1839]: 2025-02-13T15:56:08.682632Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 15:56:08.727324 waagent[1839]: 2025-02-13T15:56:08.726071Z INFO Daemon Daemon Successfully mounted dvd Feb 13 15:56:08.744920 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 15:56:08.748105 waagent[1839]: 2025-02-13T15:56:08.748009Z INFO Daemon Daemon Detect protocol endpoint Feb 13 15:56:08.753096 waagent[1839]: 2025-02-13T15:56:08.752980Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 15:56:08.759085 waagent[1839]: 2025-02-13T15:56:08.759014Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 13 15:56:08.765736 waagent[1839]: 2025-02-13T15:56:08.765667Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 15:56:08.771084 waagent[1839]: 2025-02-13T15:56:08.771017Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 15:56:08.777307 waagent[1839]: 2025-02-13T15:56:08.776022Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 15:56:08.802811 waagent[1839]: 2025-02-13T15:56:08.802752Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 15:56:08.810961 waagent[1839]: 2025-02-13T15:56:08.810919Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 15:56:08.818317 waagent[1839]: 2025-02-13T15:56:08.816713Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 15:56:08.824791 systemd[1858]: Queued start job for default target default.target. Feb 13 15:56:08.837141 systemd[1858]: Created slice app.slice - User Application Slice. Feb 13 15:56:08.837510 systemd[1858]: Reached target paths.target - Paths. Feb 13 15:56:08.837525 systemd[1858]: Reached target timers.target - Timers. Feb 13 15:56:08.841460 systemd[1858]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:56:08.850593 systemd[1858]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:56:08.851778 systemd[1858]: Reached target sockets.target - Sockets. Feb 13 15:56:08.851795 systemd[1858]: Reached target basic.target - Basic System. Feb 13 15:56:08.851840 systemd[1858]: Reached target default.target - Main User Target. Feb 13 15:56:08.851866 systemd[1858]: Startup finished in 186ms. Feb 13 15:56:08.852272 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:56:08.858725 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:56:08.860069 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 15:56:09.147097 kubelet[1846]: E0213 15:56:09.146996 1846 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:56:09.150911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:56:09.151040 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:56:09.244332 waagent[1839]: 2025-02-13T15:56:09.243651Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 15:56:09.250188 waagent[1839]: 2025-02-13T15:56:09.250109Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 15:56:09.259491 waagent[1839]: 2025-02-13T15:56:09.259436Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 15:56:09.278890 waagent[1839]: 2025-02-13T15:56:09.278843Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 15:56:09.284702 waagent[1839]: 2025-02-13T15:56:09.284652Z INFO Daemon Feb 13 15:56:09.287633 waagent[1839]: 2025-02-13T15:56:09.287579Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 222e894e-ee00-4b4e-9c4d-66f7ac0e6913 eTag: 10680722688576574642 source: Fabric] Feb 13 15:56:09.298913 waagent[1839]: 2025-02-13T15:56:09.298862Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Feb 13 15:56:09.305642 waagent[1839]: 2025-02-13T15:56:09.305585Z INFO Daemon Feb 13 15:56:09.308445 waagent[1839]: 2025-02-13T15:56:09.308389Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 15:56:09.319104 waagent[1839]: 2025-02-13T15:56:09.319065Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 15:56:09.412511 waagent[1839]: 2025-02-13T15:56:09.412368Z INFO Daemon Downloaded certificate {'thumbprint': 'A2D5717FE0E35E9A351338086CD8A26FBBFA4653', 'hasPrivateKey': False} Feb 13 15:56:09.422437 waagent[1839]: 2025-02-13T15:56:09.422340Z INFO Daemon Downloaded certificate {'thumbprint': '9FCB91E0F41E36EBE23AD405951AFD735E0DACBF', 'hasPrivateKey': True} Feb 13 15:56:09.431898 waagent[1839]: 2025-02-13T15:56:09.431840Z INFO Daemon Fetch goal state completed Feb 13 15:56:09.445663 waagent[1839]: 2025-02-13T15:56:09.445595Z INFO Daemon Daemon Starting provisioning Feb 13 15:56:09.451199 waagent[1839]: 2025-02-13T15:56:09.451129Z INFO Daemon Daemon Handle ovf-env.xml. Feb 13 15:56:09.455710 waagent[1839]: 2025-02-13T15:56:09.455649Z INFO Daemon Daemon Set hostname [ci-4152.2.1-a-d82c5cac77] Feb 13 15:56:09.471329 waagent[1839]: 2025-02-13T15:56:09.466375Z INFO Daemon Daemon Publish hostname [ci-4152.2.1-a-d82c5cac77] Feb 13 15:56:09.472533 waagent[1839]: 2025-02-13T15:56:09.472466Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 15:56:09.478508 waagent[1839]: 2025-02-13T15:56:09.478448Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 15:56:09.502684 systemd-networkd[1623]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:56:09.502701 systemd-networkd[1623]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 15:56:09.502756 systemd-networkd[1623]: eth0: DHCP lease lost Feb 13 15:56:09.504108 waagent[1839]: 2025-02-13T15:56:09.503971Z INFO Daemon Daemon Create user account if not exists Feb 13 15:56:09.509937 waagent[1839]: 2025-02-13T15:56:09.509842Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 15:56:09.515722 systemd-networkd[1623]: eth0: DHCPv6 lease lost Feb 13 15:56:09.516063 waagent[1839]: 2025-02-13T15:56:09.515975Z INFO Daemon Daemon Configure sudoer Feb 13 15:56:09.521024 waagent[1839]: 2025-02-13T15:56:09.520945Z INFO Daemon Daemon Configure sshd Feb 13 15:56:09.525576 waagent[1839]: 2025-02-13T15:56:09.525495Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 15:56:09.538252 waagent[1839]: 2025-02-13T15:56:09.538179Z INFO Daemon Daemon Deploy ssh public key. Feb 13 15:56:09.549359 systemd-networkd[1623]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 15:56:10.669067 waagent[1839]: 2025-02-13T15:56:10.664542Z INFO Daemon Daemon Provisioning complete Feb 13 15:56:10.685521 waagent[1839]: 2025-02-13T15:56:10.685471Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 15:56:10.691766 waagent[1839]: 2025-02-13T15:56:10.691702Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Feb 13 15:56:10.702037 waagent[1839]: 2025-02-13T15:56:10.701979Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 15:56:10.838905 waagent[1915]: 2025-02-13T15:56:10.838381Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 15:56:10.838905 waagent[1915]: 2025-02-13T15:56:10.838533Z INFO ExtHandler ExtHandler OS: flatcar 4152.2.1 Feb 13 15:56:10.838905 waagent[1915]: 2025-02-13T15:56:10.838583Z INFO ExtHandler ExtHandler Python: 3.11.10 Feb 13 15:56:10.848819 waagent[1915]: 2025-02-13T15:56:10.848738Z INFO ExtHandler ExtHandler Distro: flatcar-4152.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 15:56:10.849122 waagent[1915]: 2025-02-13T15:56:10.849084Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:56:10.849265 waagent[1915]: 2025-02-13T15:56:10.849233Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:56:10.859006 waagent[1915]: 2025-02-13T15:56:10.858919Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 15:56:10.865389 waagent[1915]: 2025-02-13T15:56:10.865339Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 15:56:10.866327 waagent[1915]: 2025-02-13T15:56:10.866047Z INFO ExtHandler Feb 13 15:56:10.866327 waagent[1915]: 2025-02-13T15:56:10.866124Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 566ee743-4ebc-4cb5-a7c3-652216e50cff eTag: 10680722688576574642 source: Fabric] Feb 13 15:56:10.866595 waagent[1915]: 2025-02-13T15:56:10.866555Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 13 15:56:10.867242 waagent[1915]: 2025-02-13T15:56:10.867200Z INFO ExtHandler Feb 13 15:56:10.868080 waagent[1915]: 2025-02-13T15:56:10.867386Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 15:56:10.873271 waagent[1915]: 2025-02-13T15:56:10.871645Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 15:56:10.949586 waagent[1915]: 2025-02-13T15:56:10.949454Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A2D5717FE0E35E9A351338086CD8A26FBBFA4653', 'hasPrivateKey': False} Feb 13 15:56:10.950200 waagent[1915]: 2025-02-13T15:56:10.950151Z INFO ExtHandler Downloaded certificate {'thumbprint': '9FCB91E0F41E36EBE23AD405951AFD735E0DACBF', 'hasPrivateKey': True} Feb 13 15:56:10.950810 waagent[1915]: 2025-02-13T15:56:10.950766Z INFO ExtHandler Fetch goal state completed Feb 13 15:56:10.969625 waagent[1915]: 2025-02-13T15:56:10.969560Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1915 Feb 13 15:56:10.969931 waagent[1915]: 2025-02-13T15:56:10.969896Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 15:56:10.971732 waagent[1915]: 2025-02-13T15:56:10.971685Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4152.2.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 15:56:10.972237 waagent[1915]: 2025-02-13T15:56:10.972198Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 15:56:10.982879 waagent[1915]: 2025-02-13T15:56:10.982841Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 15:56:10.983195 waagent[1915]: 2025-02-13T15:56:10.983158Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 15:56:10.989386 waagent[1915]: 2025-02-13T15:56:10.989347Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Feb 13 15:56:10.996489 systemd[1]: Reloading requested from client PID 1930 ('systemctl') (unit waagent.service)... Feb 13 15:56:10.996506 systemd[1]: Reloading... Feb 13 15:56:11.068741 zram_generator::config[1964]: No configuration found. Feb 13 15:56:11.183574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:56:11.264416 systemd[1]: Reloading finished in 267 ms. Feb 13 15:56:11.289228 waagent[1915]: 2025-02-13T15:56:11.285556Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 15:56:11.291758 systemd[1]: Reloading requested from client PID 2018 ('systemctl') (unit waagent.service)... Feb 13 15:56:11.292017 systemd[1]: Reloading... Feb 13 15:56:11.360490 zram_generator::config[2048]: No configuration found. Feb 13 15:56:11.474764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:56:11.555641 systemd[1]: Reloading finished in 262 ms. Feb 13 15:56:11.581998 waagent[1915]: 2025-02-13T15:56:11.581901Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 15:56:11.582126 waagent[1915]: 2025-02-13T15:56:11.582087Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 15:56:11.665105 waagent[1915]: 2025-02-13T15:56:11.665019Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 13 15:56:11.665729 waagent[1915]: 2025-02-13T15:56:11.665681Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 15:56:11.666569 waagent[1915]: 2025-02-13T15:56:11.666479Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 15:56:11.667096 waagent[1915]: 2025-02-13T15:56:11.666938Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 15:56:11.667380 waagent[1915]: 2025-02-13T15:56:11.667329Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:56:11.668282 waagent[1915]: 2025-02-13T15:56:11.667470Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:56:11.668282 waagent[1915]: 2025-02-13T15:56:11.667555Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:56:11.668282 waagent[1915]: 2025-02-13T15:56:11.667692Z INFO EnvHandler ExtHandler Configure routes Feb 13 15:56:11.668282 waagent[1915]: 2025-02-13T15:56:11.667751Z INFO EnvHandler ExtHandler Gateway:None Feb 13 15:56:11.668282 waagent[1915]: 2025-02-13T15:56:11.667792Z INFO EnvHandler ExtHandler Routes:None Feb 13 15:56:11.668619 waagent[1915]: 2025-02-13T15:56:11.668554Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 15:56:11.668811 waagent[1915]: 2025-02-13T15:56:11.668772Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:56:11.669096 waagent[1915]: 2025-02-13T15:56:11.669054Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 13 15:56:11.669417 waagent[1915]: 2025-02-13T15:56:11.669371Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 13 15:56:11.669417 waagent[1915]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 13 15:56:11.669417 waagent[1915]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 13 15:56:11.669417 waagent[1915]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 13 15:56:11.669417 waagent[1915]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:56:11.669417 waagent[1915]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:56:11.669417 waagent[1915]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:56:11.670126 waagent[1915]: 2025-02-13T15:56:11.670057Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 13 15:56:11.670480 waagent[1915]: 2025-02-13T15:56:11.670440Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 13 15:56:11.671460 waagent[1915]: 2025-02-13T15:56:11.670386Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 15:56:11.671460 waagent[1915]: 2025-02-13T15:56:11.671033Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 15:56:11.679281 waagent[1915]: 2025-02-13T15:56:11.678486Z INFO ExtHandler ExtHandler Feb 13 15:56:11.679281 waagent[1915]: 2025-02-13T15:56:11.678608Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 4d17bb68-fd6b-42b6-a19f-073ebc1d8851 correlation f248b3b0-2e2c-44f2-ad1e-e0498abf12ac created: 2025-02-13T15:55:30.752510Z] Feb 13 15:56:11.679281 waagent[1915]: 2025-02-13T15:56:11.678983Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Feb 13 15:56:11.679281 waagent[1915]: 2025-02-13T15:56:11.679590Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Feb 13 15:56:11.694524 waagent[1915]: 2025-02-13T15:56:11.694447Z INFO MonitorHandler ExtHandler Network interfaces: Feb 13 15:56:11.694524 waagent[1915]: Executing ['ip', '-a', '-o', 'link']: Feb 13 15:56:11.694524 waagent[1915]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 13 15:56:11.694524 waagent[1915]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fb:f6:13 brd ff:ff:ff:ff:ff:ff Feb 13 15:56:11.694524 waagent[1915]: 3: enP25644s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fb:f6:13 brd ff:ff:ff:ff:ff:ff\ altname enP25644p0s2 Feb 13 15:56:11.694524 waagent[1915]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 13 15:56:11.694524 waagent[1915]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 13 15:56:11.694524 waagent[1915]: 2: eth0 inet 10.200.20.24/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 13 15:56:11.694524 waagent[1915]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 13 15:56:11.694524 waagent[1915]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Feb 13 15:56:11.694524 waagent[1915]: 2: eth0 inet6 fe80::20d:3aff:fefb:f613/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 15:56:11.694524 waagent[1915]: 3: enP25644s1 inet6 fe80::20d:3aff:fefb:f613/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 15:56:11.979488 waagent[1915]: 2025-02-13T15:56:11.974535Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 13 15:56:11.979488 waagent[1915]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:56:11.979488 waagent[1915]: pkts bytes target prot opt in out source destination Feb 13 15:56:11.979488 waagent[1915]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:56:11.979488 waagent[1915]: pkts bytes target prot opt in out source destination Feb 13 15:56:11.979488 waagent[1915]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:56:11.979488 waagent[1915]: pkts bytes target prot opt in out source destination Feb 13 15:56:11.979488 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 15:56:11.979488 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 15:56:11.979488 waagent[1915]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 15:56:11.985027 waagent[1915]: 2025-02-13T15:56:11.984905Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 3D601169-F7B8-4B8B-BD87-7DB0248D72C8;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Feb 13 15:56:11.988922 waagent[1915]: 2025-02-13T15:56:11.988832Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 13 15:56:11.988922 waagent[1915]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:56:11.988922 waagent[1915]: pkts bytes target prot opt in out source destination Feb 13 15:56:11.988922 waagent[1915]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:56:11.988922 waagent[1915]: pkts bytes target prot opt in out source destination Feb 13 15:56:11.988922 waagent[1915]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:56:11.988922 waagent[1915]: pkts bytes target prot opt in out source destination Feb 13 15:56:11.988922 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 15:56:11.988922 waagent[1915]: 3 363 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 
13 15:56:11.988922 waagent[1915]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 15:56:11.989202 waagent[1915]: 2025-02-13T15:56:11.989165Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 13 15:56:19.248055 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:56:19.257500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:56:19.364069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:56:19.368993 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:56:19.442840 kubelet[2145]: E0213 15:56:19.442790 2145 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:56:19.445959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:56:19.446084 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:56:29.498243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:56:29.505595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:56:29.600811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:56:29.605378 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:56:29.698533 kubelet[2161]: E0213 15:56:29.698422 2161 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:56:29.701200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:56:29.701385 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:56:29.952741 chronyd[1675]: Selected source PHC0 Feb 13 15:56:37.841467 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:56:37.849560 systemd[1]: Started sshd@0-10.200.20.24:22-10.200.16.10:60836.service - OpenSSH per-connection server daemon (10.200.16.10:60836). Feb 13 15:56:38.305877 sshd[2170]: Accepted publickey for core from 10.200.16.10 port 60836 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c Feb 13 15:56:38.307287 sshd-session[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:38.312366 systemd-logind[1709]: New session 3 of user core. Feb 13 15:56:38.318501 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:56:38.691843 systemd[1]: Started sshd@1-10.200.20.24:22-10.200.16.10:60840.service - OpenSSH per-connection server daemon (10.200.16.10:60840). Feb 13 15:56:39.126498 sshd[2175]: Accepted publickey for core from 10.200.16.10 port 60840 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c Feb 13 15:56:39.127905 sshd-session[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:39.133119 systemd-logind[1709]: New session 4 of user core. 
Feb 13 15:56:39.138486 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:56:39.440457 sshd[2177]: Connection closed by 10.200.16.10 port 60840 Feb 13 15:56:39.441016 sshd-session[2175]: pam_unix(sshd:session): session closed for user core Feb 13 15:56:39.444809 systemd[1]: sshd@1-10.200.20.24:22-10.200.16.10:60840.service: Deactivated successfully. Feb 13 15:56:39.447746 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:56:39.449444 systemd-logind[1709]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:56:39.451017 systemd-logind[1709]: Removed session 4. Feb 13 15:56:39.531567 systemd[1]: Started sshd@2-10.200.20.24:22-10.200.16.10:52376.service - OpenSSH per-connection server daemon (10.200.16.10:52376). Feb 13 15:56:39.748105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:56:39.756589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:56:39.977582 sshd[2182]: Accepted publickey for core from 10.200.16.10 port 52376 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c Feb 13 15:56:39.978972 sshd-session[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:39.983180 systemd-logind[1709]: New session 5 of user core. Feb 13 15:56:39.989468 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:56:40.060198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:56:40.071594 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:56:40.118743 kubelet[2193]: E0213 15:56:40.118679 2193 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:56:40.121440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:56:40.121568 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:56:40.298706 sshd[2187]: Connection closed by 10.200.16.10 port 52376
Feb 13 15:56:40.298250 sshd-session[2182]: pam_unix(sshd:session): session closed for user core
Feb 13 15:56:40.301116 systemd[1]: sshd@2-10.200.20.24:22-10.200.16.10:52376.service: Deactivated successfully.
Feb 13 15:56:40.302841 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:56:40.304455 systemd-logind[1709]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:56:40.305629 systemd-logind[1709]: Removed session 5.
Feb 13 15:56:40.385501 systemd[1]: Started sshd@3-10.200.20.24:22-10.200.16.10:52384.service - OpenSSH per-connection server daemon (10.200.16.10:52384).
Feb 13 15:56:40.870313 sshd[2205]: Accepted publickey for core from 10.200.16.10 port 52384 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 15:56:40.871655 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:56:40.876487 systemd-logind[1709]: New session 6 of user core.
Feb 13 15:56:40.882488 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:56:41.217464 sshd[2207]: Connection closed by 10.200.16.10 port 52384
Feb 13 15:56:41.218096 sshd-session[2205]: pam_unix(sshd:session): session closed for user core
Feb 13 15:56:41.221508 systemd[1]: sshd@3-10.200.20.24:22-10.200.16.10:52384.service: Deactivated successfully.
Feb 13 15:56:41.223015 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:56:41.224575 systemd-logind[1709]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:56:41.225841 systemd-logind[1709]: Removed session 6.
Feb 13 15:56:41.298968 systemd[1]: Started sshd@4-10.200.20.24:22-10.200.16.10:52400.service - OpenSSH per-connection server daemon (10.200.16.10:52400).
Feb 13 15:56:41.770922 sshd[2212]: Accepted publickey for core from 10.200.16.10 port 52400 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 15:56:41.772273 sshd-session[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:56:41.777228 systemd-logind[1709]: New session 7 of user core.
Feb 13 15:56:41.782496 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:56:42.065954 sudo[2215]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:56:42.066271 sudo[2215]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:56:42.085009 sudo[2215]: pam_unix(sudo:session): session closed for user root
Feb 13 15:56:42.159489 sshd[2214]: Connection closed by 10.200.16.10 port 52400
Feb 13 15:56:42.160217 sshd-session[2212]: pam_unix(sshd:session): session closed for user core
Feb 13 15:56:42.164092 systemd[1]: sshd@4-10.200.20.24:22-10.200.16.10:52400.service: Deactivated successfully.
Feb 13 15:56:42.166171 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:56:42.166901 systemd-logind[1709]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:56:42.168245 systemd-logind[1709]: Removed session 7.
Feb 13 15:56:42.238357 systemd[1]: Started sshd@5-10.200.20.24:22-10.200.16.10:52410.service - OpenSSH per-connection server daemon (10.200.16.10:52410).
Feb 13 15:56:42.675996 sshd[2220]: Accepted publickey for core from 10.200.16.10 port 52410 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 15:56:42.677436 sshd-session[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:56:42.681246 systemd-logind[1709]: New session 8 of user core.
Feb 13 15:56:42.689471 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:56:42.923944 sudo[2224]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:56:42.924223 sudo[2224]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:56:42.927509 sudo[2224]: pam_unix(sudo:session): session closed for user root
Feb 13 15:56:42.932967 sudo[2223]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:56:42.933656 sudo[2223]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:56:42.954606 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:56:42.978263 augenrules[2246]: No rules
Feb 13 15:56:42.979933 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:56:42.980414 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:56:42.982675 sudo[2223]: pam_unix(sudo:session): session closed for user root
Feb 13 15:56:43.050960 sshd[2222]: Connection closed by 10.200.16.10 port 52410
Feb 13 15:56:43.051539 sshd-session[2220]: pam_unix(sshd:session): session closed for user core
Feb 13 15:56:43.055215 systemd[1]: sshd@5-10.200.20.24:22-10.200.16.10:52410.service: Deactivated successfully.
Feb 13 15:56:43.057058 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:56:43.058114 systemd-logind[1709]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:56:43.059055 systemd-logind[1709]: Removed session 8.
Feb 13 15:56:43.135906 systemd[1]: Started sshd@6-10.200.20.24:22-10.200.16.10:52424.service - OpenSSH per-connection server daemon (10.200.16.10:52424).
Feb 13 15:56:43.607034 sshd[2254]: Accepted publickey for core from 10.200.16.10 port 52424 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 15:56:43.608389 sshd-session[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:56:43.613314 systemd-logind[1709]: New session 9 of user core.
Feb 13 15:56:43.617525 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:56:43.872216 sudo[2257]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:56:43.872543 sudo[2257]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:56:44.241654 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:56:44.241848 (dockerd)[2275]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:56:44.491019 dockerd[2275]: time="2025-02-13T15:56:44.490964926Z" level=info msg="Starting up"
Feb 13 15:56:44.692765 dockerd[2275]: time="2025-02-13T15:56:44.692705381Z" level=info msg="Loading containers: start."
Feb 13 15:56:44.838436 kernel: Initializing XFRM netlink socket
Feb 13 15:56:44.918010 systemd-networkd[1623]: docker0: Link UP
Feb 13 15:56:44.955827 dockerd[2275]: time="2025-02-13T15:56:44.955696590Z" level=info msg="Loading containers: done."
Feb 13 15:56:44.973289 dockerd[2275]: time="2025-02-13T15:56:44.973232423Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:56:44.973477 dockerd[2275]: time="2025-02-13T15:56:44.973366743Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 15:56:44.973511 dockerd[2275]: time="2025-02-13T15:56:44.973480463Z" level=info msg="Daemon has completed initialization"
Feb 13 15:56:45.028413 dockerd[2275]: time="2025-02-13T15:56:45.028281525Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:56:45.028685 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:56:46.357286 containerd[1756]: time="2025-02-13T15:56:46.357212273Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\""
Feb 13 15:56:47.259153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1767632849.mount: Deactivated successfully.
Feb 13 15:56:49.383250 containerd[1756]: time="2025-02-13T15:56:49.383186681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:49.386137 containerd[1756]: time="2025-02-13T15:56:49.386071807Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205861"
Feb 13 15:56:49.390334 containerd[1756]: time="2025-02-13T15:56:49.390273176Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:49.395920 containerd[1756]: time="2025-02-13T15:56:49.395862428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:49.396920 containerd[1756]: time="2025-02-13T15:56:49.396875630Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 3.039620557s"
Feb 13 15:56:49.396920 containerd[1756]: time="2025-02-13T15:56:49.396919591Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\""
Feb 13 15:56:49.419073 containerd[1756]: time="2025-02-13T15:56:49.418962718Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\""
Feb 13 15:56:50.247989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Feb 13 15:56:50.257489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:56:50.346359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:56:50.359672 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:56:50.408222 kubelet[2529]: E0213 15:56:50.408151 2529 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:56:50.411858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:56:50.412110 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:56:51.459349 containerd[1756]: time="2025-02-13T15:56:51.458676745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:51.463485 containerd[1756]: time="2025-02-13T15:56:51.463162635Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383091"
Feb 13 15:56:51.469263 containerd[1756]: time="2025-02-13T15:56:51.469211968Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:51.476322 containerd[1756]: time="2025-02-13T15:56:51.476237143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:51.477067 containerd[1756]: time="2025-02-13T15:56:51.476941985Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 2.057942467s"
Feb 13 15:56:51.477067 containerd[1756]: time="2025-02-13T15:56:51.476975945Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\""
Feb 13 15:56:51.499632 containerd[1756]: time="2025-02-13T15:56:51.499587634Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\""
Feb 13 15:56:51.775687 update_engine[1719]: I20250213 15:56:51.775604 1719 update_attempter.cc:509] Updating boot flags...
Feb 13 15:56:52.368363 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2562)
Feb 13 15:56:52.501738 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2566)
Feb 13 15:56:53.390932 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Feb 13 15:56:53.759334 containerd[1756]: time="2025-02-13T15:56:53.759061098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:53.762826 containerd[1756]: time="2025-02-13T15:56:53.762588945Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15766980"
Feb 13 15:56:53.767060 containerd[1756]: time="2025-02-13T15:56:53.766997995Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:53.773807 containerd[1756]: time="2025-02-13T15:56:53.773732890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:53.774894 containerd[1756]: time="2025-02-13T15:56:53.774764292Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 2.275129298s"
Feb 13 15:56:53.774894 containerd[1756]: time="2025-02-13T15:56:53.774798612Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\""
Feb 13 15:56:53.795328 containerd[1756]: time="2025-02-13T15:56:53.795177816Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\""
Feb 13 15:56:54.817125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2285157304.mount: Deactivated successfully.
Feb 13 15:56:55.342405 containerd[1756]: time="2025-02-13T15:56:55.342350927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:55.346310 containerd[1756]: time="2025-02-13T15:56:55.346240097Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273375"
Feb 13 15:56:55.350333 containerd[1756]: time="2025-02-13T15:56:55.350261626Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:55.356290 containerd[1756]: time="2025-02-13T15:56:55.356215360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:55.357096 containerd[1756]: time="2025-02-13T15:56:55.356973202Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.561758226s"
Feb 13 15:56:55.357096 containerd[1756]: time="2025-02-13T15:56:55.357005242Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\""
Feb 13 15:56:55.377287 containerd[1756]: time="2025-02-13T15:56:55.377222809Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:56:56.124588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount698553874.mount: Deactivated successfully.
Feb 13 15:56:57.640253 containerd[1756]: time="2025-02-13T15:56:57.640195991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:57.643828 containerd[1756]: time="2025-02-13T15:56:57.643778479Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Feb 13 15:56:57.647543 containerd[1756]: time="2025-02-13T15:56:57.647492768Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:57.653022 containerd[1756]: time="2025-02-13T15:56:57.652949380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:57.654153 containerd[1756]: time="2025-02-13T15:56:57.654016743Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.276558653s"
Feb 13 15:56:57.654153 containerd[1756]: time="2025-02-13T15:56:57.654053743Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 15:56:57.676122 containerd[1756]: time="2025-02-13T15:56:57.676072675Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 15:56:58.286736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418938477.mount: Deactivated successfully.
Feb 13 15:56:58.313674 containerd[1756]: time="2025-02-13T15:56:58.313619808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:58.316467 containerd[1756]: time="2025-02-13T15:56:58.316415135Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Feb 13 15:56:58.320815 containerd[1756]: time="2025-02-13T15:56:58.320765905Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:58.326375 containerd[1756]: time="2025-02-13T15:56:58.326286358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:58.327558 containerd[1756]: time="2025-02-13T15:56:58.327064480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 650.951685ms"
Feb 13 15:56:58.327558 containerd[1756]: time="2025-02-13T15:56:58.327101960Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 13 15:56:58.348240 containerd[1756]: time="2025-02-13T15:56:58.348190169Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Feb 13 15:56:59.080816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2650647441.mount: Deactivated successfully.
Feb 13 15:57:00.498013 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Feb 13 15:57:00.506494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:57:00.617055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:57:00.629591 (kubelet)[2788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:57:00.669420 kubelet[2788]: E0213 15:57:00.669349 2788 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:57:00.671723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:57:00.671847 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:57:02.255894 containerd[1756]: time="2025-02-13T15:57:02.255567505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:57:02.318332 containerd[1756]: time="2025-02-13T15:57:02.318223470Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786"
Feb 13 15:57:02.323908 containerd[1756]: time="2025-02-13T15:57:02.323850041Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:57:02.329506 containerd[1756]: time="2025-02-13T15:57:02.329448532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:57:02.331070 containerd[1756]: time="2025-02-13T15:57:02.330615934Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.982382685s"
Feb 13 15:57:02.331070 containerd[1756]: time="2025-02-13T15:57:02.330653854Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Feb 13 15:57:07.386352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:57:07.401559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:57:07.419356 systemd[1]: Reloading requested from client PID 2867 ('systemctl') (unit session-9.scope)...
Feb 13 15:57:07.419685 systemd[1]: Reloading...
Feb 13 15:57:07.529378 zram_generator::config[2910]: No configuration found.
Feb 13 15:57:07.637417 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:57:07.717572 systemd[1]: Reloading finished in 297 ms.
Feb 13 15:57:08.712436 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 15:57:08.712539 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 15:57:08.712800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:57:08.719802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:57:11.822051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:57:11.827052 (kubelet)[2971]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:57:11.869497 kubelet[2971]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:57:11.869844 kubelet[2971]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:57:11.869892 kubelet[2971]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:57:11.870033 kubelet[2971]: I0213 15:57:11.869995 2971 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:57:12.443915 kubelet[2971]: I0213 15:57:12.443882 2971 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:57:13.513354 kubelet[2971]: I0213 15:57:12.444027 2971 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:57:13.513354 kubelet[2971]: I0213 15:57:12.444235 2971 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:57:13.513354 kubelet[2971]: E0213 15:57:12.458287 2971 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.24:6443: connect: connection refused
Feb 13 15:57:13.513354 kubelet[2971]: I0213 15:57:12.458392 2971 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:57:13.513354 kubelet[2971]: I0213 15:57:12.466802 2971 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:57:13.513354 kubelet[2971]: I0213 15:57:12.467021 2971 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:57:13.513741 kubelet[2971]: I0213 15:57:12.467283 2971 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:57:13.513741 kubelet[2971]: I0213 15:57:12.467337 2971 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:57:13.513741 kubelet[2971]: I0213 15:57:12.467346 2971 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:57:13.518021 kubelet[2971]: I0213 15:57:13.517975 2971 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:57:13.520512 kubelet[2971]: I0213 15:57:13.520290 2971 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:57:13.520512 kubelet[2971]: I0213 15:57:13.520337 2971 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:57:13.520512 kubelet[2971]: I0213 15:57:13.520365 2971 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:57:13.520512 kubelet[2971]: I0213 15:57:13.520377 2971 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:57:13.523310 kubelet[2971]: W0213 15:57:13.522617 2971 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-d82c5cac77&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
Feb 13 15:57:13.523310 kubelet[2971]: E0213 15:57:13.522670 2971 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-d82c5cac77&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
Feb 13 15:57:13.523310 kubelet[2971]: W0213 15:57:13.522978 2971 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
Feb 13 15:57:13.523310 kubelet[2971]: E0213 15:57:13.523009 2971 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
Feb 13 15:57:13.523625 kubelet[2971]: I0213 15:57:13.523609 2971 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:57:13.524031 kubelet[2971]: I0213 15:57:13.524016 2971 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:57:13.524158 kubelet[2971]: W0213 15:57:13.524147 2971 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:57:13.525248 kubelet[2971]: I0213 15:57:13.525100 2971 server.go:1256] "Started kubelet"
Feb 13 15:57:13.527178 kubelet[2971]: I0213 15:57:13.527156 2971 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:57:13.528244 kubelet[2971]: I0213 15:57:13.527707 2971 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:57:13.528244 kubelet[2971]: I0213 15:57:13.527918 2971 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:57:13.528244 kubelet[2971]: I0213 15:57:13.528002 2971 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:57:13.530103 kubelet[2971]: E0213 15:57:13.530077 2971 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.1-a-d82c5cac77.1823cfaf7178228d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.1-a-d82c5cac77,UID:ci-4152.2.1-a-d82c5cac77,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.1-a-d82c5cac77,},FirstTimestamp:2025-02-13 15:57:13.525068429 +0000 UTC m=+1.694264391,LastTimestamp:2025-02-13 15:57:13.525068429 +0000 UTC m=+1.694264391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.1-a-d82c5cac77,}"
Feb 13 15:57:13.531044 kubelet[2971]: I0213 15:57:13.530627 2971 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:57:13.534887 kubelet[2971]: E0213 15:57:13.534245 2971 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.1-a-d82c5cac77\" not found"
Feb 13 15:57:13.534887 kubelet[2971]: I0213 15:57:13.534283 2971 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:57:13.534887 kubelet[2971]: I0213 15:57:13.534400 2971 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:57:13.534887 kubelet[2971]: I0213 15:57:13.534455 2971 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:57:13.534887 kubelet[2971]: W0213 15:57:13.534803 2971 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
Feb 13 15:57:13.534887 kubelet[2971]: E0213 15:57:13.534841 2971 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
Feb 13 15:57:13.535333 kubelet[2971]: E0213 15:57:13.535317 2971 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:57:13.535956 kubelet[2971]: E0213 15:57:13.535933 2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-d82c5cac77?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="200ms"
Feb 13 15:57:13.536646 kubelet[2971]: I0213 15:57:13.536626 2971 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:57:13.536815 kubelet[2971]: I0213 15:57:13.536798 2971 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:57:13.538102 kubelet[2971]: I0213 15:57:13.538084 2971 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:57:13.736791 kubelet[2971]: E0213 15:57:13.736751 2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-d82c5cac77?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="400ms"
Feb 13 15:57:13.855585 kubelet[2971]: I0213 15:57:13.855481 2971 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-d82c5cac77"
Feb 13 15:57:13.856064 kubelet[2971]: E0213 15:57:13.856040 2971 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4152.2.1-a-d82c5cac77"
Feb 13 15:57:13.856959 kubelet[2971]: I0213 15:57:13.856937 2971 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:57:13.856959 kubelet[2971]: I0213 15:57:13.856957 2971 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13
15:57:13.857045 kubelet[2971]: I0213 15:57:13.856977 2971 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:57:14.057721 kubelet[2971]: I0213 15:57:14.057691 2971 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.058101 kubelet[2971]: E0213 15:57:14.058075 2971 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.137777 kubelet[2971]: E0213 15:57:14.137686 2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-d82c5cac77?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="800ms" Feb 13 15:57:14.164442 kubelet[2971]: I0213 15:57:14.164403 2971 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:57:14.165506 kubelet[2971]: I0213 15:57:14.165484 2971 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:57:14.165506 kubelet[2971]: I0213 15:57:14.165538 2971 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:57:14.165506 kubelet[2971]: I0213 15:57:14.165557 2971 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:57:14.165506 kubelet[2971]: E0213 15:57:14.165610 2971 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:57:14.166626 kubelet[2971]: W0213 15:57:14.166584 2971 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:14.167661 kubelet[2971]: E0213 15:57:14.167643 2971 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:14.265976 kubelet[2971]: E0213 15:57:14.265942 2971 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:57:14.267055 kubelet[2971]: I0213 15:57:14.267028 2971 policy_none.go:49] "None policy: Start" Feb 13 15:57:14.268711 kubelet[2971]: I0213 15:57:14.268110 2971 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:57:14.268711 kubelet[2971]: I0213 15:57:14.268158 2971 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:57:14.321090 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:57:14.333348 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:57:14.345526 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:57:14.347407 kubelet[2971]: I0213 15:57:14.346986 2971 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:57:14.347407 kubelet[2971]: I0213 15:57:14.347274 2971 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:57:14.348978 kubelet[2971]: E0213 15:57:14.348946 2971 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.1-a-d82c5cac77\" not found" Feb 13 15:57:14.460954 kubelet[2971]: I0213 15:57:14.460836 2971 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.461533 kubelet[2971]: E0213 15:57:14.461508 2971 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.466700 kubelet[2971]: I0213 15:57:14.466671 2971 topology_manager.go:215] "Topology Admit Handler" podUID="12b95efe7a97503fb735322c4e5bd63a" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.468280 kubelet[2971]: I0213 15:57:14.468235 2971 topology_manager.go:215] "Topology Admit Handler" podUID="70f84c601f4ed093b81e6a7951337b88" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.469009 kubelet[2971]: E0213 15:57:14.468969 2971 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:14.470450 kubelet[2971]: I0213 
15:57:14.470189 2971 topology_manager.go:215] "Topology Admit Handler" podUID="7bd86d6e594091c921f856df971f384a" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.474841 kubelet[2971]: W0213 15:57:14.474792 2971 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:14.476164 kubelet[2971]: E0213 15:57:14.475223 2971 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:14.478011 systemd[1]: Created slice kubepods-burstable-pod12b95efe7a97503fb735322c4e5bd63a.slice - libcontainer container kubepods-burstable-pod12b95efe7a97503fb735322c4e5bd63a.slice. Feb 13 15:57:14.491417 systemd[1]: Created slice kubepods-burstable-pod70f84c601f4ed093b81e6a7951337b88.slice - libcontainer container kubepods-burstable-pod70f84c601f4ed093b81e6a7951337b88.slice. Feb 13 15:57:14.505435 systemd[1]: Created slice kubepods-burstable-pod7bd86d6e594091c921f856df971f384a.slice - libcontainer container kubepods-burstable-pod7bd86d6e594091c921f856df971f384a.slice. 
Feb 13 15:57:14.540346 kubelet[2971]: I0213 15:57:14.540035 2971 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12b95efe7a97503fb735322c4e5bd63a-ca-certs\") pod \"kube-apiserver-ci-4152.2.1-a-d82c5cac77\" (UID: \"12b95efe7a97503fb735322c4e5bd63a\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.540346 kubelet[2971]: I0213 15:57:14.540079 2971 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12b95efe7a97503fb735322c4e5bd63a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.1-a-d82c5cac77\" (UID: \"12b95efe7a97503fb735322c4e5bd63a\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.540346 kubelet[2971]: I0213 15:57:14.540101 2971 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70f84c601f4ed093b81e6a7951337b88-ca-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-d82c5cac77\" (UID: \"70f84c601f4ed093b81e6a7951337b88\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.540346 kubelet[2971]: I0213 15:57:14.540120 2971 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/70f84c601f4ed093b81e6a7951337b88-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.1-a-d82c5cac77\" (UID: \"70f84c601f4ed093b81e6a7951337b88\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.540346 kubelet[2971]: I0213 15:57:14.540149 2971 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70f84c601f4ed093b81e6a7951337b88-k8s-certs\") pod 
\"kube-controller-manager-ci-4152.2.1-a-d82c5cac77\" (UID: \"70f84c601f4ed093b81e6a7951337b88\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.540795 kubelet[2971]: I0213 15:57:14.540170 2971 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7bd86d6e594091c921f856df971f384a-kubeconfig\") pod \"kube-scheduler-ci-4152.2.1-a-d82c5cac77\" (UID: \"7bd86d6e594091c921f856df971f384a\") " pod="kube-system/kube-scheduler-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.540795 kubelet[2971]: I0213 15:57:14.540188 2971 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12b95efe7a97503fb735322c4e5bd63a-k8s-certs\") pod \"kube-apiserver-ci-4152.2.1-a-d82c5cac77\" (UID: \"12b95efe7a97503fb735322c4e5bd63a\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.540795 kubelet[2971]: I0213 15:57:14.540208 2971 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70f84c601f4ed093b81e6a7951337b88-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.1-a-d82c5cac77\" (UID: \"70f84c601f4ed093b81e6a7951337b88\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.540795 kubelet[2971]: I0213 15:57:14.540229 2971 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70f84c601f4ed093b81e6a7951337b88-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.1-a-d82c5cac77\" (UID: \"70f84c601f4ed093b81e6a7951337b88\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:14.791187 containerd[1756]: time="2025-02-13T15:57:14.791078533Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.1-a-d82c5cac77,Uid:12b95efe7a97503fb735322c4e5bd63a,Namespace:kube-system,Attempt:0,}" Feb 13 15:57:14.803938 containerd[1756]: time="2025-02-13T15:57:14.803729758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.1-a-d82c5cac77,Uid:70f84c601f4ed093b81e6a7951337b88,Namespace:kube-system,Attempt:0,}" Feb 13 15:57:14.808771 containerd[1756]: time="2025-02-13T15:57:14.808732848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.1-a-d82c5cac77,Uid:7bd86d6e594091c921f856df971f384a,Namespace:kube-system,Attempt:0,}" Feb 13 15:57:14.875603 kubelet[2971]: W0213 15:57:14.875512 2971 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-d82c5cac77&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:14.875603 kubelet[2971]: E0213 15:57:14.875582 2971 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-d82c5cac77&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:14.907124 kubelet[2971]: W0213 15:57:14.907080 2971 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:14.907204 kubelet[2971]: E0213 15:57:14.907131 2971 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: 
connection refused Feb 13 15:57:14.938615 kubelet[2971]: E0213 15:57:14.938586 2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-d82c5cac77?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="1.6s" Feb 13 15:57:15.263877 kubelet[2971]: I0213 15:57:15.263849 2971 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:15.264240 kubelet[2971]: E0213 15:57:15.264193 2971 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:15.675093 kubelet[2971]: W0213 15:57:15.674937 2971 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:15.675093 kubelet[2971]: E0213 15:57:15.675004 2971 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:16.539789 kubelet[2971]: E0213 15:57:16.539750 2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-d82c5cac77?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="3.2s" Feb 13 15:57:16.866771 kubelet[2971]: I0213 15:57:16.866657 2971 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:16.867235 kubelet[2971]: E0213 
15:57:16.867209 2971 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:16.941193 kubelet[2971]: W0213 15:57:16.941162 2971 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:16.941267 kubelet[2971]: E0213 15:57:16.941201 2971 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:17.117262 kubelet[2971]: W0213 15:57:17.117179 2971 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-d82c5cac77&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:17.117262 kubelet[2971]: E0213 15:57:17.117225 2971 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-d82c5cac77&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:17.441046 kubelet[2971]: W0213 15:57:17.440942 2971 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:17.441046 kubelet[2971]: E0213 15:57:17.440985 2971 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:17.860017 kubelet[2971]: W0213 15:57:17.859979 2971 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:17.860017 kubelet[2971]: E0213 15:57:17.860024 2971 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:18.564996 kubelet[2971]: E0213 15:57:18.564962 2971 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.24:6443: connect: connection refused Feb 13 15:57:19.741035 kubelet[2971]: E0213 15:57:19.741001 2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-d82c5cac77?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="6.4s" Feb 13 15:57:20.069306 kubelet[2971]: I0213 15:57:20.069258 2971 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:20.069618 kubelet[2971]: E0213 15:57:20.069594 2971 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:20.844368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085992192.mount: Deactivated successfully. Feb 13 15:57:20.878210 containerd[1756]: time="2025-02-13T15:57:20.877388641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:57:20.893040 containerd[1756]: time="2025-02-13T15:57:20.892975952Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 15:57:20.896225 containerd[1756]: time="2025-02-13T15:57:20.895471397Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:57:20.899683 containerd[1756]: time="2025-02-13T15:57:20.899639205Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:57:20.907644 containerd[1756]: time="2025-02-13T15:57:20.907547621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:57:20.912977 containerd[1756]: time="2025-02-13T15:57:20.912928111Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:57:20.916499 containerd[1756]: time="2025-02-13T15:57:20.916447758Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:57:20.922512 containerd[1756]: time="2025-02-13T15:57:20.922466050Z" level=info 
msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:57:20.923489 containerd[1756]: time="2025-02-13T15:57:20.923224611Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 6.132067118s" Feb 13 15:57:20.931671 containerd[1756]: time="2025-02-13T15:57:20.931622668Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 6.12782255s" Feb 13 15:57:20.936604 containerd[1756]: time="2025-02-13T15:57:20.936395037Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 6.127586709s" Feb 13 15:57:21.118983 containerd[1756]: time="2025-02-13T15:57:21.117820596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:57:21.118983 containerd[1756]: time="2025-02-13T15:57:21.118168076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:57:21.118983 containerd[1756]: time="2025-02-13T15:57:21.118190037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:57:21.120784 containerd[1756]: time="2025-02-13T15:57:21.120714641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:57:21.125814 containerd[1756]: time="2025-02-13T15:57:21.125580491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:57:21.125956 containerd[1756]: time="2025-02-13T15:57:21.125840852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:57:21.126587 containerd[1756]: time="2025-02-13T15:57:21.125570251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:57:21.126751 containerd[1756]: time="2025-02-13T15:57:21.126720613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:57:21.126851 containerd[1756]: time="2025-02-13T15:57:21.126830734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:57:21.127003 containerd[1756]: time="2025-02-13T15:57:21.126939774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:57:21.127377 containerd[1756]: time="2025-02-13T15:57:21.127349055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:57:21.127600 containerd[1756]: time="2025-02-13T15:57:21.127286934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:57:21.152582 systemd[1]: Started cri-containerd-0b6a464f47188a5f833ecc47eaff5b3312c509a534e7870f25e25f4883324d82.scope - libcontainer container 0b6a464f47188a5f833ecc47eaff5b3312c509a534e7870f25e25f4883324d82. Feb 13 15:57:21.158836 systemd[1]: Started cri-containerd-093402ed2ffd73dbd527ddedc01c85c8c274877f92156e1b05461675b86f7b0c.scope - libcontainer container 093402ed2ffd73dbd527ddedc01c85c8c274877f92156e1b05461675b86f7b0c. Feb 13 15:57:21.161918 systemd[1]: Started cri-containerd-1dd1f663515bea077307d7f9a239f575f609dcfe4f39de59932184a6e85073af.scope - libcontainer container 1dd1f663515bea077307d7f9a239f575f609dcfe4f39de59932184a6e85073af. Feb 13 15:57:21.212043 containerd[1756]: time="2025-02-13T15:57:21.211788221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.1-a-d82c5cac77,Uid:12b95efe7a97503fb735322c4e5bd63a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b6a464f47188a5f833ecc47eaff5b3312c509a534e7870f25e25f4883324d82\"" Feb 13 15:57:21.219582 containerd[1756]: time="2025-02-13T15:57:21.219527197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.1-a-d82c5cac77,Uid:70f84c601f4ed093b81e6a7951337b88,Namespace:kube-system,Attempt:0,} returns sandbox id \"093402ed2ffd73dbd527ddedc01c85c8c274877f92156e1b05461675b86f7b0c\"" Feb 13 15:57:21.219834 containerd[1756]: time="2025-02-13T15:57:21.219546877Z" level=info msg="CreateContainer within sandbox \"0b6a464f47188a5f833ecc47eaff5b3312c509a534e7870f25e25f4883324d82\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:57:21.223072 containerd[1756]: time="2025-02-13T15:57:21.222954683Z" level=info msg="CreateContainer within sandbox 
\"093402ed2ffd73dbd527ddedc01c85c8c274877f92156e1b05461675b86f7b0c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:57:21.225546 containerd[1756]: time="2025-02-13T15:57:21.225512288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.1-a-d82c5cac77,Uid:7bd86d6e594091c921f856df971f384a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dd1f663515bea077307d7f9a239f575f609dcfe4f39de59932184a6e85073af\"" Feb 13 15:57:21.228082 containerd[1756]: time="2025-02-13T15:57:21.228042653Z" level=info msg="CreateContainer within sandbox \"1dd1f663515bea077307d7f9a239f575f609dcfe4f39de59932184a6e85073af\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:57:21.318268 containerd[1756]: time="2025-02-13T15:57:21.318216872Z" level=info msg="CreateContainer within sandbox \"0b6a464f47188a5f833ecc47eaff5b3312c509a534e7870f25e25f4883324d82\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fcc38e35e3bb3600360d99105ca89dae417cbb7a4285d9e2bde565c248e6f5ae\"" Feb 13 15:57:21.319331 containerd[1756]: time="2025-02-13T15:57:21.318889513Z" level=info msg="StartContainer for \"fcc38e35e3bb3600360d99105ca89dae417cbb7a4285d9e2bde565c248e6f5ae\"" Feb 13 15:57:21.328014 containerd[1756]: time="2025-02-13T15:57:21.327969171Z" level=info msg="CreateContainer within sandbox \"093402ed2ffd73dbd527ddedc01c85c8c274877f92156e1b05461675b86f7b0c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4a657cfcba8273b85346d56f057865f0770fc14cce5987306792a4f3e3a2a6e9\"" Feb 13 15:57:21.329237 containerd[1756]: time="2025-02-13T15:57:21.329044333Z" level=info msg="StartContainer for \"4a657cfcba8273b85346d56f057865f0770fc14cce5987306792a4f3e3a2a6e9\"" Feb 13 15:57:21.337341 containerd[1756]: time="2025-02-13T15:57:21.336913068Z" level=info msg="CreateContainer within sandbox \"1dd1f663515bea077307d7f9a239f575f609dcfe4f39de59932184a6e85073af\" 
for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"04e2feadd4731ff60a12acab4d297cced0b49a4f55cd4af6a286f02d319069e3\"" Feb 13 15:57:21.340600 containerd[1756]: time="2025-02-13T15:57:21.340554796Z" level=info msg="StartContainer for \"04e2feadd4731ff60a12acab4d297cced0b49a4f55cd4af6a286f02d319069e3\"" Feb 13 15:57:21.347531 systemd[1]: Started cri-containerd-fcc38e35e3bb3600360d99105ca89dae417cbb7a4285d9e2bde565c248e6f5ae.scope - libcontainer container fcc38e35e3bb3600360d99105ca89dae417cbb7a4285d9e2bde565c248e6f5ae. Feb 13 15:57:21.369612 systemd[1]: Started cri-containerd-4a657cfcba8273b85346d56f057865f0770fc14cce5987306792a4f3e3a2a6e9.scope - libcontainer container 4a657cfcba8273b85346d56f057865f0770fc14cce5987306792a4f3e3a2a6e9. Feb 13 15:57:21.382525 systemd[1]: Started cri-containerd-04e2feadd4731ff60a12acab4d297cced0b49a4f55cd4af6a286f02d319069e3.scope - libcontainer container 04e2feadd4731ff60a12acab4d297cced0b49a4f55cd4af6a286f02d319069e3. Feb 13 15:57:21.417610 containerd[1756]: time="2025-02-13T15:57:21.417567948Z" level=info msg="StartContainer for \"fcc38e35e3bb3600360d99105ca89dae417cbb7a4285d9e2bde565c248e6f5ae\" returns successfully" Feb 13 15:57:21.437115 containerd[1756]: time="2025-02-13T15:57:21.436692465Z" level=info msg="StartContainer for \"4a657cfcba8273b85346d56f057865f0770fc14cce5987306792a4f3e3a2a6e9\" returns successfully" Feb 13 15:57:21.453939 containerd[1756]: time="2025-02-13T15:57:21.453888499Z" level=info msg="StartContainer for \"04e2feadd4731ff60a12acab4d297cced0b49a4f55cd4af6a286f02d319069e3\" returns successfully" Feb 13 15:57:23.529575 kubelet[2971]: I0213 15:57:23.529524 2971 apiserver.go:52] "Watching apiserver" Feb 13 15:57:23.602648 kubelet[2971]: E0213 15:57:23.602593 2971 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152.2.1-a-d82c5cac77.1823cfaf7178228d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.1-a-d82c5cac77,UID:ci-4152.2.1-a-d82c5cac77,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.1-a-d82c5cac77,},FirstTimestamp:2025-02-13 15:57:13.525068429 +0000 UTC m=+1.694264391,LastTimestamp:2025-02-13 15:57:13.525068429 +0000 UTC m=+1.694264391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.1-a-d82c5cac77,}" Feb 13 15:57:23.635046 kubelet[2971]: I0213 15:57:23.635001 2971 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:57:23.662689 kubelet[2971]: E0213 15:57:23.662502 2971 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152.2.1-a-d82c5cac77.1823cfaf721400f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.1-a-d82c5cac77,UID:ci-4152.2.1-a-d82c5cac77,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4152.2.1-a-d82c5cac77,},FirstTimestamp:2025-02-13 15:57:13.535283449 +0000 UTC m=+1.704479411,LastTimestamp:2025-02-13 15:57:13.535283449 +0000 UTC m=+1.704479411,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.1-a-d82c5cac77,}" Feb 13 15:57:23.949640 kubelet[2971]: E0213 15:57:23.949528 2971 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4152.2.1-a-d82c5cac77" not found Feb 13 15:57:24.307940 kubelet[2971]: E0213 15:57:24.307908 2971 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode 
annotation: timed out waiting for the condition; caused by: nodes "ci-4152.2.1-a-d82c5cac77" not found Feb 13 15:57:24.351101 kubelet[2971]: E0213 15:57:24.350888 2971 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.1-a-d82c5cac77\" not found" Feb 13 15:57:24.749122 kubelet[2971]: E0213 15:57:24.749088 2971 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4152.2.1-a-d82c5cac77" not found Feb 13 15:57:25.637061 kubelet[2971]: E0213 15:57:25.637025 2971 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4152.2.1-a-d82c5cac77" not found Feb 13 15:57:26.144436 kubelet[2971]: E0213 15:57:26.144395 2971 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.2.1-a-d82c5cac77\" not found" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:26.474524 kubelet[2971]: I0213 15:57:26.472185 2971 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:26.480459 kubelet[2971]: I0213 15:57:26.480351 2971 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:27.468334 systemd[1]: Reloading requested from client PID 3247 ('systemctl') (unit session-9.scope)... Feb 13 15:57:27.468361 systemd[1]: Reloading... Feb 13 15:57:27.488079 kubelet[2971]: W0213 15:57:27.487343 2971 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:57:27.564363 zram_generator::config[3283]: No configuration found. 
Feb 13 15:57:27.664360 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:57:27.755853 systemd[1]: Reloading finished in 287 ms. Feb 13 15:57:27.791699 kubelet[2971]: I0213 15:57:27.791660 2971 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:57:27.793499 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:57:27.804621 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:57:27.804826 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:57:27.804877 systemd[1]: kubelet.service: Consumed 1.016s CPU time, 111.5M memory peak, 0B memory swap peak. Feb 13 15:57:27.812700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:57:28.004856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:57:28.010680 (kubelet)[3351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:57:28.070965 kubelet[3351]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:57:28.070965 kubelet[3351]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:57:28.070965 kubelet[3351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:57:28.071346 kubelet[3351]: I0213 15:57:28.071015 3351 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:57:28.075443 kubelet[3351]: I0213 15:57:28.075388 3351 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:57:28.075443 kubelet[3351]: I0213 15:57:28.075425 3351 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:57:28.076335 kubelet[3351]: I0213 15:57:28.075645 3351 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:57:28.077848 kubelet[3351]: I0213 15:57:28.077802 3351 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:57:28.080124 kubelet[3351]: I0213 15:57:28.079970 3351 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:57:28.087047 kubelet[3351]: I0213 15:57:28.087013 3351 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:57:28.087223 kubelet[3351]: I0213 15:57:28.087215 3351 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:57:28.090113 kubelet[3351]: I0213 15:57:28.088359 3351 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:57:28.090113 kubelet[3351]: I0213 15:57:28.088413 3351 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:57:28.090113 kubelet[3351]: I0213 15:57:28.088423 3351 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:57:28.090113 kubelet[3351]: I0213 
15:57:28.088468 3351 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:57:28.090113 kubelet[3351]: I0213 15:57:28.088584 3351 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:57:28.090113 kubelet[3351]: I0213 15:57:28.088601 3351 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:57:28.090113 kubelet[3351]: I0213 15:57:28.088622 3351 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:57:28.090616 kubelet[3351]: I0213 15:57:28.088636 3351 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:57:28.091189 kubelet[3351]: I0213 15:57:28.090947 3351 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:57:28.091189 kubelet[3351]: I0213 15:57:28.091168 3351 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:57:28.092510 kubelet[3351]: I0213 15:57:28.092474 3351 server.go:1256] "Started kubelet" Feb 13 15:57:28.106518 kubelet[3351]: I0213 15:57:28.106476 3351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:57:28.109860 kubelet[3351]: I0213 15:57:28.109771 3351 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:57:28.110805 kubelet[3351]: I0213 15:57:28.110763 3351 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:57:28.116333 kubelet[3351]: I0213 15:57:28.111384 3351 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:57:28.151636 kubelet[3351]: I0213 15:57:28.150616 3351 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:57:28.151636 kubelet[3351]: I0213 15:57:28.113556 3351 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:57:28.151636 kubelet[3351]: I0213 15:57:28.113585 3351 desired_state_of_world_populator.go:151] "Desired 
state populator starts to run" Feb 13 15:57:28.151636 kubelet[3351]: I0213 15:57:28.150822 3351 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:57:28.151636 kubelet[3351]: I0213 15:57:28.139193 3351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:57:28.153858 kubelet[3351]: E0213 15:57:28.153805 3351 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:57:28.154078 kubelet[3351]: I0213 15:57:28.154063 3351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:57:28.154150 kubelet[3351]: I0213 15:57:28.154141 3351 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:57:28.154235 kubelet[3351]: I0213 15:57:28.154226 3351 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:57:28.156227 kubelet[3351]: E0213 15:57:28.154362 3351 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:57:28.156227 kubelet[3351]: I0213 15:57:28.143695 3351 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:57:28.156227 kubelet[3351]: I0213 15:57:28.154583 3351 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:57:28.167499 kubelet[3351]: I0213 15:57:28.166696 3351 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:57:28.218274 kubelet[3351]: I0213 15:57:28.218237 3351 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:57:28.218457 kubelet[3351]: I0213 15:57:28.218445 3351 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:57:28.218528 kubelet[3351]: I0213 15:57:28.218519 3351 state_mem.go:36] 
"Initialized new in-memory state store" Feb 13 15:57:28.218789 kubelet[3351]: I0213 15:57:28.218778 3351 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:57:28.221264 kubelet[3351]: I0213 15:57:28.218873 3351 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:57:28.221264 kubelet[3351]: I0213 15:57:28.218884 3351 policy_none.go:49] "None policy: Start" Feb 13 15:57:28.221264 kubelet[3351]: I0213 15:57:28.218920 3351 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.221264 kubelet[3351]: I0213 15:57:28.220349 3351 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:57:28.221264 kubelet[3351]: I0213 15:57:28.220383 3351 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:57:28.221264 kubelet[3351]: I0213 15:57:28.220648 3351 state_mem.go:75] "Updated machine memory state" Feb 13 15:57:28.226280 kubelet[3351]: I0213 15:57:28.226249 3351 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:57:28.227508 kubelet[3351]: I0213 15:57:28.226991 3351 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:57:28.232605 kubelet[3351]: I0213 15:57:28.231915 3351 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.232605 kubelet[3351]: I0213 15:57:28.232447 3351 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.254676 kubelet[3351]: I0213 15:57:28.254574 3351 topology_manager.go:215] "Topology Admit Handler" podUID="12b95efe7a97503fb735322c4e5bd63a" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.254676 kubelet[3351]: I0213 15:57:28.254668 3351 topology_manager.go:215] "Topology Admit Handler" podUID="70f84c601f4ed093b81e6a7951337b88" podNamespace="kube-system" 
podName="kube-controller-manager-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.254849 kubelet[3351]: I0213 15:57:28.254726 3351 topology_manager.go:215] "Topology Admit Handler" podUID="7bd86d6e594091c921f856df971f384a" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.267516 kubelet[3351]: W0213 15:57:28.266726 3351 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:57:28.267938 kubelet[3351]: W0213 15:57:28.267711 3351 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:57:28.268087 kubelet[3351]: W0213 15:57:28.268006 3351 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:57:28.268087 kubelet[3351]: E0213 15:57:28.268063 3351 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4152.2.1-a-d82c5cac77\" already exists" pod="kube-system/kube-scheduler-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.352733 kubelet[3351]: I0213 15:57:28.352392 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12b95efe7a97503fb735322c4e5bd63a-ca-certs\") pod \"kube-apiserver-ci-4152.2.1-a-d82c5cac77\" (UID: \"12b95efe7a97503fb735322c4e5bd63a\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.352733 kubelet[3351]: I0213 15:57:28.352439 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12b95efe7a97503fb735322c4e5bd63a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.1-a-d82c5cac77\" (UID: 
\"12b95efe7a97503fb735322c4e5bd63a\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.352733 kubelet[3351]: I0213 15:57:28.352465 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70f84c601f4ed093b81e6a7951337b88-ca-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-d82c5cac77\" (UID: \"70f84c601f4ed093b81e6a7951337b88\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.352733 kubelet[3351]: I0213 15:57:28.352494 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70f84c601f4ed093b81e6a7951337b88-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-d82c5cac77\" (UID: \"70f84c601f4ed093b81e6a7951337b88\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.352733 kubelet[3351]: I0213 15:57:28.352517 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70f84c601f4ed093b81e6a7951337b88-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.1-a-d82c5cac77\" (UID: \"70f84c601f4ed093b81e6a7951337b88\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.353006 kubelet[3351]: I0213 15:57:28.352558 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70f84c601f4ed093b81e6a7951337b88-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.1-a-d82c5cac77\" (UID: \"70f84c601f4ed093b81e6a7951337b88\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.353006 kubelet[3351]: I0213 15:57:28.352578 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12b95efe7a97503fb735322c4e5bd63a-k8s-certs\") pod \"kube-apiserver-ci-4152.2.1-a-d82c5cac77\" (UID: \"12b95efe7a97503fb735322c4e5bd63a\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.353006 kubelet[3351]: I0213 15:57:28.352600 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/70f84c601f4ed093b81e6a7951337b88-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.1-a-d82c5cac77\" (UID: \"70f84c601f4ed093b81e6a7951337b88\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:28.353006 kubelet[3351]: I0213 15:57:28.352619 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7bd86d6e594091c921f856df971f384a-kubeconfig\") pod \"kube-scheduler-ci-4152.2.1-a-d82c5cac77\" (UID: \"7bd86d6e594091c921f856df971f384a\") " pod="kube-system/kube-scheduler-ci-4152.2.1-a-d82c5cac77" Feb 13 15:57:29.091311 kubelet[3351]: I0213 15:57:29.091267 3351 apiserver.go:52] "Watching apiserver" Feb 13 15:57:29.153403 kubelet[3351]: I0213 15:57:29.153342 3351 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:57:29.245238 kubelet[3351]: I0213 15:57:29.245184 3351 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.1-a-d82c5cac77" podStartSLOduration=2.24511881 podStartE2EDuration="2.24511881s" podCreationTimestamp="2025-02-13 15:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:57:29.220439281 +0000 UTC m=+1.205729863" watchObservedRunningTime="2025-02-13 15:57:29.24511881 +0000 UTC m=+1.230409392" Feb 13 15:57:29.299486 kubelet[3351]: I0213 
15:57:29.299359 3351 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.1-a-d82c5cac77" podStartSLOduration=1.299312317 podStartE2EDuration="1.299312317s" podCreationTimestamp="2025-02-13 15:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:57:29.247282814 +0000 UTC m=+1.232573396" watchObservedRunningTime="2025-02-13 15:57:29.299312317 +0000 UTC m=+1.284602899" Feb 13 15:57:29.402831 kubelet[3351]: I0213 15:57:29.402630 3351 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.1-a-d82c5cac77" podStartSLOduration=1.402588361 podStartE2EDuration="1.402588361s" podCreationTimestamp="2025-02-13 15:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:57:29.303512325 +0000 UTC m=+1.288802907" watchObservedRunningTime="2025-02-13 15:57:29.402588361 +0000 UTC m=+1.387878943" Feb 13 15:57:32.700943 sudo[2257]: pam_unix(sudo:session): session closed for user root Feb 13 15:57:32.787110 sshd[2256]: Connection closed by 10.200.16.10 port 52424 Feb 13 15:57:32.787757 sshd-session[2254]: pam_unix(sshd:session): session closed for user core Feb 13 15:57:32.790814 systemd[1]: sshd@6-10.200.20.24:22-10.200.16.10:52424.service: Deactivated successfully. Feb 13 15:57:32.792945 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:57:32.793752 systemd[1]: session-9.scope: Consumed 6.635s CPU time, 188.2M memory peak, 0B memory swap peak. Feb 13 15:57:32.796196 systemd-logind[1709]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:57:32.798753 systemd-logind[1709]: Removed session 9. 
Feb 13 15:57:41.445712 kubelet[3351]: I0213 15:57:41.445596 3351 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:57:41.447273 containerd[1756]: time="2025-02-13T15:57:41.446680897Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:57:41.447837 kubelet[3351]: I0213 15:57:41.446907 3351 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:57:42.353538 kubelet[3351]: I0213 15:57:42.352075 3351 topology_manager.go:215] "Topology Admit Handler" podUID="cebe2c1d-36ee-4a2b-b1fa-404e5373602c" podNamespace="kube-system" podName="kube-proxy-x292c" Feb 13 15:57:42.362550 systemd[1]: Created slice kubepods-besteffort-podcebe2c1d_36ee_4a2b_b1fa_404e5373602c.slice - libcontainer container kubepods-besteffort-podcebe2c1d_36ee_4a2b_b1fa_404e5373602c.slice. Feb 13 15:57:42.442608 kubelet[3351]: I0213 15:57:42.442541 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cebe2c1d-36ee-4a2b-b1fa-404e5373602c-lib-modules\") pod \"kube-proxy-x292c\" (UID: \"cebe2c1d-36ee-4a2b-b1fa-404e5373602c\") " pod="kube-system/kube-proxy-x292c" Feb 13 15:57:42.442608 kubelet[3351]: I0213 15:57:42.442595 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cebe2c1d-36ee-4a2b-b1fa-404e5373602c-xtables-lock\") pod \"kube-proxy-x292c\" (UID: \"cebe2c1d-36ee-4a2b-b1fa-404e5373602c\") " pod="kube-system/kube-proxy-x292c" Feb 13 15:57:42.442608 kubelet[3351]: I0213 15:57:42.442618 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cebe2c1d-36ee-4a2b-b1fa-404e5373602c-kube-proxy\") pod \"kube-proxy-x292c\" (UID: 
\"cebe2c1d-36ee-4a2b-b1fa-404e5373602c\") " pod="kube-system/kube-proxy-x292c" Feb 13 15:57:42.442862 kubelet[3351]: I0213 15:57:42.442643 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnzgv\" (UniqueName: \"kubernetes.io/projected/cebe2c1d-36ee-4a2b-b1fa-404e5373602c-kube-api-access-cnzgv\") pod \"kube-proxy-x292c\" (UID: \"cebe2c1d-36ee-4a2b-b1fa-404e5373602c\") " pod="kube-system/kube-proxy-x292c" Feb 13 15:57:42.531333 kubelet[3351]: I0213 15:57:42.529837 3351 topology_manager.go:215] "Topology Admit Handler" podUID="5969fd64-66ca-4ba6-b65b-f95f5407a11e" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-pdxhb" Feb 13 15:57:42.541277 systemd[1]: Created slice kubepods-besteffort-pod5969fd64_66ca_4ba6_b65b_f95f5407a11e.slice - libcontainer container kubepods-besteffort-pod5969fd64_66ca_4ba6_b65b_f95f5407a11e.slice. Feb 13 15:57:42.543332 kubelet[3351]: I0213 15:57:42.542834 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5969fd64-66ca-4ba6-b65b-f95f5407a11e-var-lib-calico\") pod \"tigera-operator-c7ccbd65-pdxhb\" (UID: \"5969fd64-66ca-4ba6-b65b-f95f5407a11e\") " pod="tigera-operator/tigera-operator-c7ccbd65-pdxhb" Feb 13 15:57:42.543332 kubelet[3351]: I0213 15:57:42.542872 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbpzt\" (UniqueName: \"kubernetes.io/projected/5969fd64-66ca-4ba6-b65b-f95f5407a11e-kube-api-access-vbpzt\") pod \"tigera-operator-c7ccbd65-pdxhb\" (UID: \"5969fd64-66ca-4ba6-b65b-f95f5407a11e\") " pod="tigera-operator/tigera-operator-c7ccbd65-pdxhb" Feb 13 15:57:42.672461 containerd[1756]: time="2025-02-13T15:57:42.672329915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x292c,Uid:cebe2c1d-36ee-4a2b-b1fa-404e5373602c,Namespace:kube-system,Attempt:0,}" Feb 13 
15:57:42.753463 containerd[1756]: time="2025-02-13T15:57:42.753330644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:57:42.753463 containerd[1756]: time="2025-02-13T15:57:42.753401284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:57:42.753463 containerd[1756]: time="2025-02-13T15:57:42.753413284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:57:42.754523 containerd[1756]: time="2025-02-13T15:57:42.753510324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:57:42.778554 systemd[1]: Started cri-containerd-15d939fb85829895ffe29f7e746e2e07ce7440ca7f839d067648f349c13cd743.scope - libcontainer container 15d939fb85829895ffe29f7e746e2e07ce7440ca7f839d067648f349c13cd743. 
Feb 13 15:57:42.802985 containerd[1756]: time="2025-02-13T15:57:42.802907507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x292c,Uid:cebe2c1d-36ee-4a2b-b1fa-404e5373602c,Namespace:kube-system,Attempt:0,} returns sandbox id \"15d939fb85829895ffe29f7e746e2e07ce7440ca7f839d067648f349c13cd743\"" Feb 13 15:57:42.807631 containerd[1756]: time="2025-02-13T15:57:42.807474197Z" level=info msg="CreateContainer within sandbox \"15d939fb85829895ffe29f7e746e2e07ce7440ca7f839d067648f349c13cd743\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:57:42.847734 containerd[1756]: time="2025-02-13T15:57:42.847689681Z" level=info msg="CreateContainer within sandbox \"15d939fb85829895ffe29f7e746e2e07ce7440ca7f839d067648f349c13cd743\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f0ef25a8d2029304ccab17c5236d3a0bc65e6686c113c54eff0530bea05698ce\"" Feb 13 15:57:42.848845 containerd[1756]: time="2025-02-13T15:57:42.848548443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-pdxhb,Uid:5969fd64-66ca-4ba6-b65b-f95f5407a11e,Namespace:tigera-operator,Attempt:0,}" Feb 13 15:57:42.848845 containerd[1756]: time="2025-02-13T15:57:42.848599843Z" level=info msg="StartContainer for \"f0ef25a8d2029304ccab17c5236d3a0bc65e6686c113c54eff0530bea05698ce\"" Feb 13 15:57:42.875572 systemd[1]: Started cri-containerd-f0ef25a8d2029304ccab17c5236d3a0bc65e6686c113c54eff0530bea05698ce.scope - libcontainer container f0ef25a8d2029304ccab17c5236d3a0bc65e6686c113c54eff0530bea05698ce. Feb 13 15:57:42.900932 containerd[1756]: time="2025-02-13T15:57:42.900568191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:57:42.900932 containerd[1756]: time="2025-02-13T15:57:42.900636951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:57:42.900932 containerd[1756]: time="2025-02-13T15:57:42.900698872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:57:42.901141 containerd[1756]: time="2025-02-13T15:57:42.900795152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:57:42.917027 containerd[1756]: time="2025-02-13T15:57:42.916717465Z" level=info msg="StartContainer for \"f0ef25a8d2029304ccab17c5236d3a0bc65e6686c113c54eff0530bea05698ce\" returns successfully" Feb 13 15:57:42.925559 systemd[1]: Started cri-containerd-0264717e55326ebb0b663d52a6a593873c9c98b78bf48f140f4aab242d1adfc2.scope - libcontainer container 0264717e55326ebb0b663d52a6a593873c9c98b78bf48f140f4aab242d1adfc2. Feb 13 15:57:42.964352 containerd[1756]: time="2025-02-13T15:57:42.964291084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-pdxhb,Uid:5969fd64-66ca-4ba6-b65b-f95f5407a11e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0264717e55326ebb0b663d52a6a593873c9c98b78bf48f140f4aab242d1adfc2\"" Feb 13 15:57:42.967935 containerd[1756]: time="2025-02-13T15:57:42.967782492Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 15:57:43.229395 kubelet[3351]: I0213 15:57:43.228740 3351 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-x292c" podStartSLOduration=1.228695076 podStartE2EDuration="1.228695076s" podCreationTimestamp="2025-02-13 15:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:57:43.228446276 +0000 UTC m=+15.213736858" watchObservedRunningTime="2025-02-13 15:57:43.228695076 +0000 UTC m=+15.213985658" Feb 13 15:57:44.423890 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2583392519.mount: Deactivated successfully. Feb 13 15:57:45.122349 containerd[1756]: time="2025-02-13T15:57:45.121824828Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:57:45.124723 containerd[1756]: time="2025-02-13T15:57:45.124496554Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Feb 13 15:57:45.128241 containerd[1756]: time="2025-02-13T15:57:45.128152681Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:57:45.134882 containerd[1756]: time="2025-02-13T15:57:45.134811655Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:57:45.136139 containerd[1756]: time="2025-02-13T15:57:45.135591217Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.167659245s" Feb 13 15:57:45.136139 containerd[1756]: time="2025-02-13T15:57:45.135630697Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Feb 13 15:57:45.137729 containerd[1756]: time="2025-02-13T15:57:45.137689901Z" level=info msg="CreateContainer within sandbox \"0264717e55326ebb0b663d52a6a593873c9c98b78bf48f140f4aab242d1adfc2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 15:57:45.180615 containerd[1756]: 
time="2025-02-13T15:57:45.180554711Z" level=info msg="CreateContainer within sandbox \"0264717e55326ebb0b663d52a6a593873c9c98b78bf48f140f4aab242d1adfc2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"285106d58e0aa382a4dd1a926680b015046091057729c94e6ef0bed873456aeb\"" Feb 13 15:57:45.181833 containerd[1756]: time="2025-02-13T15:57:45.181550993Z" level=info msg="StartContainer for \"285106d58e0aa382a4dd1a926680b015046091057729c94e6ef0bed873456aeb\"" Feb 13 15:57:45.207903 systemd[1]: run-containerd-runc-k8s.io-285106d58e0aa382a4dd1a926680b015046091057729c94e6ef0bed873456aeb-runc.HulAUV.mount: Deactivated successfully. Feb 13 15:57:45.214574 systemd[1]: Started cri-containerd-285106d58e0aa382a4dd1a926680b015046091057729c94e6ef0bed873456aeb.scope - libcontainer container 285106d58e0aa382a4dd1a926680b015046091057729c94e6ef0bed873456aeb. Feb 13 15:57:45.248169 containerd[1756]: time="2025-02-13T15:57:45.248098692Z" level=info msg="StartContainer for \"285106d58e0aa382a4dd1a926680b015046091057729c94e6ef0bed873456aeb\" returns successfully" Feb 13 15:57:48.169461 kubelet[3351]: I0213 15:57:48.169103 3351 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-pdxhb" podStartSLOduration=4.000087222 podStartE2EDuration="6.169059949s" podCreationTimestamp="2025-02-13 15:57:42 +0000 UTC" firstStartedPulling="2025-02-13 15:57:42.96696361 +0000 UTC m=+14.952254192" lastFinishedPulling="2025-02-13 15:57:45.135936337 +0000 UTC m=+17.121226919" observedRunningTime="2025-02-13 15:57:46.23918428 +0000 UTC m=+18.224474822" watchObservedRunningTime="2025-02-13 15:57:48.169059949 +0000 UTC m=+20.154350531" Feb 13 15:57:50.966960 kubelet[3351]: I0213 15:57:50.965678 3351 topology_manager.go:215] "Topology Admit Handler" podUID="23c6269e-271b-488d-99de-72d272399a6b" podNamespace="calico-system" podName="calico-typha-57f479dc7f-m9lcf" Feb 13 15:57:50.977388 systemd[1]: Created slice 
kubepods-besteffort-pod23c6269e_271b_488d_99de_72d272399a6b.slice - libcontainer container kubepods-besteffort-pod23c6269e_271b_488d_99de_72d272399a6b.slice. Feb 13 15:57:50.997753 kubelet[3351]: I0213 15:57:50.997571 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/23c6269e-271b-488d-99de-72d272399a6b-typha-certs\") pod \"calico-typha-57f479dc7f-m9lcf\" (UID: \"23c6269e-271b-488d-99de-72d272399a6b\") " pod="calico-system/calico-typha-57f479dc7f-m9lcf" Feb 13 15:57:50.997753 kubelet[3351]: I0213 15:57:50.997623 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpk9n\" (UniqueName: \"kubernetes.io/projected/23c6269e-271b-488d-99de-72d272399a6b-kube-api-access-rpk9n\") pod \"calico-typha-57f479dc7f-m9lcf\" (UID: \"23c6269e-271b-488d-99de-72d272399a6b\") " pod="calico-system/calico-typha-57f479dc7f-m9lcf" Feb 13 15:57:50.997753 kubelet[3351]: I0213 15:57:50.997651 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23c6269e-271b-488d-99de-72d272399a6b-tigera-ca-bundle\") pod \"calico-typha-57f479dc7f-m9lcf\" (UID: \"23c6269e-271b-488d-99de-72d272399a6b\") " pod="calico-system/calico-typha-57f479dc7f-m9lcf" Feb 13 15:57:51.144141 kubelet[3351]: I0213 15:57:51.143407 3351 topology_manager.go:215] "Topology Admit Handler" podUID="2d74574d-b29c-44f5-a433-68701a3df0e3" podNamespace="calico-system" podName="calico-node-nmj8t" Feb 13 15:57:51.155907 systemd[1]: Created slice kubepods-besteffort-pod2d74574d_b29c_44f5_a433_68701a3df0e3.slice - libcontainer container kubepods-besteffort-pod2d74574d_b29c_44f5_a433_68701a3df0e3.slice. 
Feb 13 15:57:51.199276 kubelet[3351]: I0213 15:57:51.199181 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2d74574d-b29c-44f5-a433-68701a3df0e3-cni-bin-dir\") pod \"calico-node-nmj8t\" (UID: \"2d74574d-b29c-44f5-a433-68701a3df0e3\") " pod="calico-system/calico-node-nmj8t" Feb 13 15:57:51.199276 kubelet[3351]: I0213 15:57:51.199243 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtz2f\" (UniqueName: \"kubernetes.io/projected/2d74574d-b29c-44f5-a433-68701a3df0e3-kube-api-access-vtz2f\") pod \"calico-node-nmj8t\" (UID: \"2d74574d-b29c-44f5-a433-68701a3df0e3\") " pod="calico-system/calico-node-nmj8t" Feb 13 15:57:51.199276 kubelet[3351]: I0213 15:57:51.199306 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d74574d-b29c-44f5-a433-68701a3df0e3-tigera-ca-bundle\") pod \"calico-node-nmj8t\" (UID: \"2d74574d-b29c-44f5-a433-68701a3df0e3\") " pod="calico-system/calico-node-nmj8t" Feb 13 15:57:51.199669 kubelet[3351]: I0213 15:57:51.199358 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2d74574d-b29c-44f5-a433-68701a3df0e3-var-run-calico\") pod \"calico-node-nmj8t\" (UID: \"2d74574d-b29c-44f5-a433-68701a3df0e3\") " pod="calico-system/calico-node-nmj8t" Feb 13 15:57:51.199669 kubelet[3351]: I0213 15:57:51.199382 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2d74574d-b29c-44f5-a433-68701a3df0e3-var-lib-calico\") pod \"calico-node-nmj8t\" (UID: \"2d74574d-b29c-44f5-a433-68701a3df0e3\") " pod="calico-system/calico-node-nmj8t" Feb 13 15:57:51.199669 kubelet[3351]: I0213 
15:57:51.199433 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d74574d-b29c-44f5-a433-68701a3df0e3-xtables-lock\") pod \"calico-node-nmj8t\" (UID: \"2d74574d-b29c-44f5-a433-68701a3df0e3\") " pod="calico-system/calico-node-nmj8t" Feb 13 15:57:51.199669 kubelet[3351]: I0213 15:57:51.199458 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2d74574d-b29c-44f5-a433-68701a3df0e3-flexvol-driver-host\") pod \"calico-node-nmj8t\" (UID: \"2d74574d-b29c-44f5-a433-68701a3df0e3\") " pod="calico-system/calico-node-nmj8t" Feb 13 15:57:51.199669 kubelet[3351]: I0213 15:57:51.199488 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d74574d-b29c-44f5-a433-68701a3df0e3-lib-modules\") pod \"calico-node-nmj8t\" (UID: \"2d74574d-b29c-44f5-a433-68701a3df0e3\") " pod="calico-system/calico-node-nmj8t" Feb 13 15:57:51.199774 kubelet[3351]: I0213 15:57:51.199525 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2d74574d-b29c-44f5-a433-68701a3df0e3-node-certs\") pod \"calico-node-nmj8t\" (UID: \"2d74574d-b29c-44f5-a433-68701a3df0e3\") " pod="calico-system/calico-node-nmj8t" Feb 13 15:57:51.199774 kubelet[3351]: I0213 15:57:51.199551 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2d74574d-b29c-44f5-a433-68701a3df0e3-cni-net-dir\") pod \"calico-node-nmj8t\" (UID: \"2d74574d-b29c-44f5-a433-68701a3df0e3\") " pod="calico-system/calico-node-nmj8t" Feb 13 15:57:51.199774 kubelet[3351]: I0213 15:57:51.199580 3351 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2d74574d-b29c-44f5-a433-68701a3df0e3-cni-log-dir\") pod \"calico-node-nmj8t\" (UID: \"2d74574d-b29c-44f5-a433-68701a3df0e3\") " pod="calico-system/calico-node-nmj8t" Feb 13 15:57:51.199774 kubelet[3351]: I0213 15:57:51.199602 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2d74574d-b29c-44f5-a433-68701a3df0e3-policysync\") pod \"calico-node-nmj8t\" (UID: \"2d74574d-b29c-44f5-a433-68701a3df0e3\") " pod="calico-system/calico-node-nmj8t" Feb 13 15:57:51.283004 containerd[1756]: time="2025-02-13T15:57:51.282934882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57f479dc7f-m9lcf,Uid:23c6269e-271b-488d-99de-72d272399a6b,Namespace:calico-system,Attempt:0,}" Feb 13 15:57:51.302030 kubelet[3351]: E0213 15:57:51.301715 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.302030 kubelet[3351]: W0213 15:57:51.301762 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.302030 kubelet[3351]: E0213 15:57:51.301785 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.302030 kubelet[3351]: E0213 15:57:51.301991 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.302030 kubelet[3351]: W0213 15:57:51.302004 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.302030 kubelet[3351]: E0213 15:57:51.302017 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.303925 kubelet[3351]: E0213 15:57:51.302374 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.303925 kubelet[3351]: W0213 15:57:51.302387 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.303925 kubelet[3351]: E0213 15:57:51.302426 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.303925 kubelet[3351]: E0213 15:57:51.302699 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.303925 kubelet[3351]: W0213 15:57:51.302710 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.303925 kubelet[3351]: E0213 15:57:51.302722 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.310729 kubelet[3351]: E0213 15:57:51.310595 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.310729 kubelet[3351]: W0213 15:57:51.310626 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.310729 kubelet[3351]: E0213 15:57:51.310650 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.311347 kubelet[3351]: E0213 15:57:51.311192 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.311347 kubelet[3351]: W0213 15:57:51.311208 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.311347 kubelet[3351]: E0213 15:57:51.311227 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.315860 kubelet[3351]: E0213 15:57:51.315663 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.315860 kubelet[3351]: W0213 15:57:51.315691 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.315860 kubelet[3351]: E0213 15:57:51.315719 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.316332 kubelet[3351]: E0213 15:57:51.316100 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.316332 kubelet[3351]: W0213 15:57:51.316114 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.316332 kubelet[3351]: E0213 15:57:51.316143 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.316638 kubelet[3351]: E0213 15:57:51.316605 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.316729 kubelet[3351]: W0213 15:57:51.316711 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.316800 kubelet[3351]: E0213 15:57:51.316791 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.320124 kubelet[3351]: E0213 15:57:51.320087 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.320363 kubelet[3351]: W0213 15:57:51.320254 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.320363 kubelet[3351]: E0213 15:57:51.320285 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.322449 kubelet[3351]: E0213 15:57:51.322315 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.322449 kubelet[3351]: W0213 15:57:51.322341 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.322449 kubelet[3351]: E0213 15:57:51.322368 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.323717 kubelet[3351]: E0213 15:57:51.323525 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.323717 kubelet[3351]: W0213 15:57:51.323560 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.323717 kubelet[3351]: E0213 15:57:51.323585 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.323946 kubelet[3351]: E0213 15:57:51.323917 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.323946 kubelet[3351]: W0213 15:57:51.323935 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.324073 kubelet[3351]: E0213 15:57:51.324051 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.324757 kubelet[3351]: E0213 15:57:51.324721 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.324757 kubelet[3351]: W0213 15:57:51.324745 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.330340 kubelet[3351]: E0213 15:57:51.324856 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.341419 kubelet[3351]: E0213 15:57:51.340181 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.341419 kubelet[3351]: W0213 15:57:51.341389 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.344078 kubelet[3351]: E0213 15:57:51.341434 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.356342 containerd[1756]: time="2025-02-13T15:57:51.352902301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:57:51.356342 containerd[1756]: time="2025-02-13T15:57:51.355328666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:57:51.356342 containerd[1756]: time="2025-02-13T15:57:51.355344186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:57:51.356342 containerd[1756]: time="2025-02-13T15:57:51.355448266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:57:51.361592 kubelet[3351]: I0213 15:57:51.358693 3351 topology_manager.go:215] "Topology Admit Handler" podUID="2d8778e0-23a8-47a6-b01b-5b701fc009d0" podNamespace="calico-system" podName="csi-node-driver-qbpxc" Feb 13 15:57:51.361592 kubelet[3351]: E0213 15:57:51.358987 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbpxc" podUID="2d8778e0-23a8-47a6-b01b-5b701fc009d0" Feb 13 15:57:51.384534 systemd[1]: Started cri-containerd-7da4511bbc119bc28c4ddc4996ba8ce0d7a9eddc8c4b095ec1b45845725136ff.scope - libcontainer container 7da4511bbc119bc28c4ddc4996ba8ce0d7a9eddc8c4b095ec1b45845725136ff. Feb 13 15:57:51.397644 kubelet[3351]: E0213 15:57:51.397159 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.397644 kubelet[3351]: W0213 15:57:51.397323 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.397644 kubelet[3351]: E0213 15:57:51.397353 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.399149 kubelet[3351]: E0213 15:57:51.398663 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.399149 kubelet[3351]: W0213 15:57:51.398684 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.399149 kubelet[3351]: E0213 15:57:51.398711 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.399149 kubelet[3351]: E0213 15:57:51.399010 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.399149 kubelet[3351]: W0213 15:57:51.399030 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.399149 kubelet[3351]: E0213 15:57:51.399045 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.399506 kubelet[3351]: E0213 15:57:51.399429 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.399506 kubelet[3351]: W0213 15:57:51.399444 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.399506 kubelet[3351]: E0213 15:57:51.399460 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.400205 kubelet[3351]: E0213 15:57:51.400169 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.400205 kubelet[3351]: W0213 15:57:51.400197 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.400386 kubelet[3351]: E0213 15:57:51.400214 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.400681 kubelet[3351]: E0213 15:57:51.400656 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.400681 kubelet[3351]: W0213 15:57:51.400674 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.400874 kubelet[3351]: E0213 15:57:51.400690 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.401400 kubelet[3351]: E0213 15:57:51.401105 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.401400 kubelet[3351]: W0213 15:57:51.401392 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.401517 kubelet[3351]: E0213 15:57:51.401413 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.402388 kubelet[3351]: E0213 15:57:51.402292 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.402388 kubelet[3351]: W0213 15:57:51.402351 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.402388 kubelet[3351]: E0213 15:57:51.402368 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.402986 kubelet[3351]: E0213 15:57:51.402955 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.402986 kubelet[3351]: W0213 15:57:51.402970 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.403553 kubelet[3351]: E0213 15:57:51.403425 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.403553 kubelet[3351]: W0213 15:57:51.403440 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.403553 kubelet[3351]: E0213 15:57:51.403456 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.403672 kubelet[3351]: E0213 15:57:51.403629 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.403694 kubelet[3351]: I0213 15:57:51.403672 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d8778e0-23a8-47a6-b01b-5b701fc009d0-kubelet-dir\") pod \"csi-node-driver-qbpxc\" (UID: \"2d8778e0-23a8-47a6-b01b-5b701fc009d0\") " pod="calico-system/csi-node-driver-qbpxc" Feb 13 15:57:51.403873 kubelet[3351]: E0213 15:57:51.403776 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.403873 kubelet[3351]: W0213 15:57:51.403786 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.403873 kubelet[3351]: E0213 15:57:51.403821 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.404408 kubelet[3351]: E0213 15:57:51.404202 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.404408 kubelet[3351]: W0213 15:57:51.404238 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.404408 kubelet[3351]: E0213 15:57:51.404270 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.405775 kubelet[3351]: E0213 15:57:51.405566 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.405775 kubelet[3351]: W0213 15:57:51.405597 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.405775 kubelet[3351]: E0213 15:57:51.405631 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.406593 kubelet[3351]: E0213 15:57:51.406265 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.406593 kubelet[3351]: W0213 15:57:51.406284 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.406991 kubelet[3351]: E0213 15:57:51.406780 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.408417 kubelet[3351]: E0213 15:57:51.408245 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.408417 kubelet[3351]: W0213 15:57:51.408286 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.408417 kubelet[3351]: E0213 15:57:51.408375 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.408417 kubelet[3351]: I0213 15:57:51.408420 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2d8778e0-23a8-47a6-b01b-5b701fc009d0-varrun\") pod \"csi-node-driver-qbpxc\" (UID: \"2d8778e0-23a8-47a6-b01b-5b701fc009d0\") " pod="calico-system/csi-node-driver-qbpxc" Feb 13 15:57:51.409807 kubelet[3351]: E0213 15:57:51.409672 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.409807 kubelet[3351]: W0213 15:57:51.409697 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.409807 kubelet[3351]: E0213 15:57:51.409747 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.410472 kubelet[3351]: E0213 15:57:51.410010 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.410472 kubelet[3351]: W0213 15:57:51.410024 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.411084 kubelet[3351]: E0213 15:57:51.410741 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.411677 kubelet[3351]: E0213 15:57:51.411393 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.411677 kubelet[3351]: W0213 15:57:51.411413 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.411943 kubelet[3351]: E0213 15:57:51.411880 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.412370 kubelet[3351]: E0213 15:57:51.412143 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.412370 kubelet[3351]: W0213 15:57:51.412266 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.412810 kubelet[3351]: E0213 15:57:51.412474 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.412810 kubelet[3351]: W0213 15:57:51.412489 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.412810 kubelet[3351]: E0213 15:57:51.412505 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.412810 kubelet[3351]: E0213 15:57:51.412639 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.412810 kubelet[3351]: W0213 15:57:51.412647 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.412810 kubelet[3351]: E0213 15:57:51.412658 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.412810 kubelet[3351]: E0213 15:57:51.412663 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.412810 kubelet[3351]: E0213 15:57:51.412800 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.412810 kubelet[3351]: W0213 15:57:51.412808 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.412810 kubelet[3351]: E0213 15:57:51.412829 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.414244 kubelet[3351]: E0213 15:57:51.413955 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.414244 kubelet[3351]: W0213 15:57:51.413975 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.414244 kubelet[3351]: E0213 15:57:51.414001 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.414807 kubelet[3351]: E0213 15:57:51.414251 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.414807 kubelet[3351]: W0213 15:57:51.414264 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.414807 kubelet[3351]: E0213 15:57:51.414279 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.414807 kubelet[3351]: E0213 15:57:51.414444 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.414807 kubelet[3351]: W0213 15:57:51.414453 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.414807 kubelet[3351]: E0213 15:57:51.414466 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.414807 kubelet[3351]: E0213 15:57:51.414586 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.414807 kubelet[3351]: W0213 15:57:51.414593 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.414807 kubelet[3351]: E0213 15:57:51.414602 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.430933 containerd[1756]: time="2025-02-13T15:57:51.430119614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57f479dc7f-m9lcf,Uid:23c6269e-271b-488d-99de-72d272399a6b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7da4511bbc119bc28c4ddc4996ba8ce0d7a9eddc8c4b095ec1b45845725136ff\"" Feb 13 15:57:51.433796 containerd[1756]: time="2025-02-13T15:57:51.433748021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 15:57:51.463962 containerd[1756]: time="2025-02-13T15:57:51.463516480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nmj8t,Uid:2d74574d-b29c-44f5-a433-68701a3df0e3,Namespace:calico-system,Attempt:0,}" Feb 13 15:57:51.509809 kubelet[3351]: E0213 15:57:51.509779 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.510270 kubelet[3351]: W0213 15:57:51.510117 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.510270 kubelet[3351]: E0213 15:57:51.510155 3351 plugins.go:730] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.511643 kubelet[3351]: E0213 15:57:51.511292 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.511643 kubelet[3351]: W0213 15:57:51.511333 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.511643 kubelet[3351]: E0213 15:57:51.511370 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.511643 kubelet[3351]: I0213 15:57:51.511405 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2d8778e0-23a8-47a6-b01b-5b701fc009d0-socket-dir\") pod \"csi-node-driver-qbpxc\" (UID: \"2d8778e0-23a8-47a6-b01b-5b701fc009d0\") " pod="calico-system/csi-node-driver-qbpxc" Feb 13 15:57:51.512264 kubelet[3351]: E0213 15:57:51.511672 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.512264 kubelet[3351]: W0213 15:57:51.511694 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.512264 kubelet[3351]: E0213 15:57:51.511719 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.512264 kubelet[3351]: E0213 15:57:51.512130 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.512264 kubelet[3351]: W0213 15:57:51.512144 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.512264 kubelet[3351]: E0213 15:57:51.512162 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.512899 kubelet[3351]: E0213 15:57:51.512363 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.512899 kubelet[3351]: W0213 15:57:51.512373 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.512899 kubelet[3351]: E0213 15:57:51.512388 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.513331 kubelet[3351]: E0213 15:57:51.513280 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.513643 kubelet[3351]: W0213 15:57:51.513329 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.513643 kubelet[3351]: E0213 15:57:51.513452 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.514472 kubelet[3351]: E0213 15:57:51.514439 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.514472 kubelet[3351]: W0213 15:57:51.514464 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.515120 kubelet[3351]: E0213 15:57:51.514622 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.515120 kubelet[3351]: E0213 15:57:51.515105 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.515120 kubelet[3351]: W0213 15:57:51.515119 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.515203 kubelet[3351]: E0213 15:57:51.515144 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.516561 kubelet[3351]: E0213 15:57:51.516522 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.516561 kubelet[3351]: W0213 15:57:51.516545 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.516561 kubelet[3351]: E0213 15:57:51.516622 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.517627 kubelet[3351]: E0213 15:57:51.517030 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.517627 kubelet[3351]: W0213 15:57:51.517050 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.517627 kubelet[3351]: E0213 15:57:51.517259 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.517627 kubelet[3351]: I0213 15:57:51.517452 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2d8778e0-23a8-47a6-b01b-5b701fc009d0-registration-dir\") pod \"csi-node-driver-qbpxc\" (UID: \"2d8778e0-23a8-47a6-b01b-5b701fc009d0\") " pod="calico-system/csi-node-driver-qbpxc" Feb 13 15:57:51.517627 kubelet[3351]: E0213 15:57:51.517524 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.517627 kubelet[3351]: W0213 15:57:51.517536 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.517627 kubelet[3351]: E0213 15:57:51.517552 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.518156 kubelet[3351]: E0213 15:57:51.518123 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.518156 kubelet[3351]: W0213 15:57:51.518143 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.518156 kubelet[3351]: E0213 15:57:51.518166 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.519571 kubelet[3351]: E0213 15:57:51.519528 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.519571 kubelet[3351]: W0213 15:57:51.519557 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.519830 kubelet[3351]: E0213 15:57:51.519589 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.520853 kubelet[3351]: E0213 15:57:51.520815 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.520853 kubelet[3351]: W0213 15:57:51.520847 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.521526 kubelet[3351]: E0213 15:57:51.521024 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.521805 kubelet[3351]: E0213 15:57:51.521772 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.521805 kubelet[3351]: W0213 15:57:51.521796 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.522029 kubelet[3351]: E0213 15:57:51.521842 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.522290 kubelet[3351]: I0213 15:57:51.522201 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q7ww\" (UniqueName: \"kubernetes.io/projected/2d8778e0-23a8-47a6-b01b-5b701fc009d0-kube-api-access-2q7ww\") pod \"csi-node-driver-qbpxc\" (UID: \"2d8778e0-23a8-47a6-b01b-5b701fc009d0\") " pod="calico-system/csi-node-driver-qbpxc" Feb 13 15:57:51.522471 kubelet[3351]: E0213 15:57:51.522450 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.522598 kubelet[3351]: W0213 15:57:51.522470 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.522598 kubelet[3351]: E0213 15:57:51.522501 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.523680 kubelet[3351]: E0213 15:57:51.523650 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.523680 kubelet[3351]: W0213 15:57:51.523681 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.523945 kubelet[3351]: E0213 15:57:51.523710 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.524397 kubelet[3351]: E0213 15:57:51.524359 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.524397 kubelet[3351]: W0213 15:57:51.524380 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.524492 kubelet[3351]: E0213 15:57:51.524401 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.525143 kubelet[3351]: E0213 15:57:51.525024 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.525187 kubelet[3351]: W0213 15:57:51.525141 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.525187 kubelet[3351]: E0213 15:57:51.525162 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.526127 containerd[1756]: time="2025-02-13T15:57:51.525897404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:57:51.526242 containerd[1756]: time="2025-02-13T15:57:51.526123164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:57:51.526805 containerd[1756]: time="2025-02-13T15:57:51.526616805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:57:51.526978 containerd[1756]: time="2025-02-13T15:57:51.526746965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:57:51.543191 systemd[1]: Started cri-containerd-2ba2988873d1d88761c37c4c2a874a4714cde096e239f70a45cfb5942d40e7b0.scope - libcontainer container 2ba2988873d1d88761c37c4c2a874a4714cde096e239f70a45cfb5942d40e7b0. Feb 13 15:57:51.569995 containerd[1756]: time="2025-02-13T15:57:51.569945531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nmj8t,Uid:2d74574d-b29c-44f5-a433-68701a3df0e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ba2988873d1d88761c37c4c2a874a4714cde096e239f70a45cfb5942d40e7b0\"" Feb 13 15:57:51.623481 kubelet[3351]: E0213 15:57:51.623444 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.623729 kubelet[3351]: W0213 15:57:51.623660 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.623729 kubelet[3351]: E0213 15:57:51.623689 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.624003 kubelet[3351]: E0213 15:57:51.623985 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.624003 kubelet[3351]: W0213 15:57:51.624002 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.624126 kubelet[3351]: E0213 15:57:51.624022 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.624186 kubelet[3351]: E0213 15:57:51.624169 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.624186 kubelet[3351]: W0213 15:57:51.624181 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.624277 kubelet[3351]: E0213 15:57:51.624198 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.624359 kubelet[3351]: E0213 15:57:51.624343 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.624359 kubelet[3351]: W0213 15:57:51.624357 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.624359 kubelet[3351]: E0213 15:57:51.624374 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.624547 kubelet[3351]: E0213 15:57:51.624532 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.624547 kubelet[3351]: W0213 15:57:51.624544 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.624901 kubelet[3351]: E0213 15:57:51.624560 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.625186 kubelet[3351]: E0213 15:57:51.625088 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.625186 kubelet[3351]: W0213 15:57:51.625122 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.625186 kubelet[3351]: E0213 15:57:51.625143 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.625803 kubelet[3351]: E0213 15:57:51.625679 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.625803 kubelet[3351]: W0213 15:57:51.625702 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.625803 kubelet[3351]: E0213 15:57:51.625724 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.626082 kubelet[3351]: E0213 15:57:51.626064 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.626082 kubelet[3351]: W0213 15:57:51.626081 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.626262 kubelet[3351]: E0213 15:57:51.626104 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.626853 kubelet[3351]: E0213 15:57:51.626836 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.627023 kubelet[3351]: W0213 15:57:51.626924 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.627023 kubelet[3351]: E0213 15:57:51.626968 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.627361 kubelet[3351]: E0213 15:57:51.627290 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.627361 kubelet[3351]: W0213 15:57:51.627330 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.627463 kubelet[3351]: E0213 15:57:51.627444 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.627686 kubelet[3351]: E0213 15:57:51.627667 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.627686 kubelet[3351]: W0213 15:57:51.627686 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.627780 kubelet[3351]: E0213 15:57:51.627707 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.627875 kubelet[3351]: E0213 15:57:51.627857 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.627875 kubelet[3351]: W0213 15:57:51.627872 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.627944 kubelet[3351]: E0213 15:57:51.627891 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.628199 kubelet[3351]: E0213 15:57:51.628185 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.628481 kubelet[3351]: W0213 15:57:51.628339 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.628481 kubelet[3351]: E0213 15:57:51.628377 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.628799 kubelet[3351]: E0213 15:57:51.628785 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.628978 kubelet[3351]: W0213 15:57:51.628867 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.628978 kubelet[3351]: E0213 15:57:51.628899 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:51.629252 kubelet[3351]: E0213 15:57:51.629240 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.629409 kubelet[3351]: W0213 15:57:51.629365 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.629409 kubelet[3351]: E0213 15:57:51.629387 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:51.638161 kubelet[3351]: E0213 15:57:51.638132 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:51.638331 kubelet[3351]: W0213 15:57:51.638202 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:51.638331 kubelet[3351]: E0213 15:57:51.638227 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:52.890507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2610327788.mount: Deactivated successfully. Feb 13 15:57:53.154948 kubelet[3351]: E0213 15:57:53.154818 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbpxc" podUID="2d8778e0-23a8-47a6-b01b-5b701fc009d0" Feb 13 15:57:53.548051 containerd[1756]: time="2025-02-13T15:57:53.547338052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:57:53.550366 containerd[1756]: time="2025-02-13T15:57:53.550291538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 15:57:53.556183 containerd[1756]: time="2025-02-13T15:57:53.556135590Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:57:53.562520 containerd[1756]: time="2025-02-13T15:57:53.562473202Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:57:53.563711 containerd[1756]: time="2025-02-13T15:57:53.563192924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.129401703s" Feb 13 15:57:53.563711 containerd[1756]: time="2025-02-13T15:57:53.563230884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 15:57:53.565058 containerd[1756]: time="2025-02-13T15:57:53.565017967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:57:53.578732 containerd[1756]: time="2025-02-13T15:57:53.578616754Z" level=info msg="CreateContainer within sandbox \"7da4511bbc119bc28c4ddc4996ba8ce0d7a9eddc8c4b095ec1b45845725136ff\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 15:57:53.661847 containerd[1756]: time="2025-02-13T15:57:53.661786879Z" level=info msg="CreateContainer within sandbox \"7da4511bbc119bc28c4ddc4996ba8ce0d7a9eddc8c4b095ec1b45845725136ff\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9684b1c21604edbe4faaa08c2ea2808421a8494bcb79e704ee2a5667815194b7\"" Feb 13 15:57:53.663370 containerd[1756]: time="2025-02-13T15:57:53.663333722Z" level=info msg="StartContainer for \"9684b1c21604edbe4faaa08c2ea2808421a8494bcb79e704ee2a5667815194b7\"" Feb 13 15:57:53.697511 systemd[1]: Started cri-containerd-9684b1c21604edbe4faaa08c2ea2808421a8494bcb79e704ee2a5667815194b7.scope - libcontainer container 
9684b1c21604edbe4faaa08c2ea2808421a8494bcb79e704ee2a5667815194b7. Feb 13 15:57:53.735141 containerd[1756]: time="2025-02-13T15:57:53.735080704Z" level=info msg="StartContainer for \"9684b1c21604edbe4faaa08c2ea2808421a8494bcb79e704ee2a5667815194b7\" returns successfully" Feb 13 15:57:54.333842 kubelet[3351]: E0213 15:57:54.333775 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.333842 kubelet[3351]: W0213 15:57:54.333798 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.334337 kubelet[3351]: E0213 15:57:54.333821 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.334757 kubelet[3351]: E0213 15:57:54.334439 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.334757 kubelet[3351]: W0213 15:57:54.334452 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.334757 kubelet[3351]: E0213 15:57:54.334467 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.334757 kubelet[3351]: E0213 15:57:54.334619 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.334757 kubelet[3351]: W0213 15:57:54.334628 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.334757 kubelet[3351]: E0213 15:57:54.334639 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.335036 kubelet[3351]: E0213 15:57:54.334943 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.335036 kubelet[3351]: W0213 15:57:54.334954 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.335036 kubelet[3351]: E0213 15:57:54.334971 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.335394 kubelet[3351]: E0213 15:57:54.335368 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.335552 kubelet[3351]: W0213 15:57:54.335381 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.335552 kubelet[3351]: E0213 15:57:54.335478 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.335949 kubelet[3351]: E0213 15:57:54.335847 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.335949 kubelet[3351]: W0213 15:57:54.335859 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.335949 kubelet[3351]: E0213 15:57:54.335872 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.336313 kubelet[3351]: E0213 15:57:54.336253 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.336313 kubelet[3351]: W0213 15:57:54.336265 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.336313 kubelet[3351]: E0213 15:57:54.336278 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.337285 kubelet[3351]: E0213 15:57:54.337147 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.337285 kubelet[3351]: W0213 15:57:54.337160 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.337285 kubelet[3351]: E0213 15:57:54.337187 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.337764 kubelet[3351]: E0213 15:57:54.337633 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.337764 kubelet[3351]: W0213 15:57:54.337648 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.337764 kubelet[3351]: E0213 15:57:54.337661 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.338207 kubelet[3351]: E0213 15:57:54.338079 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.338207 kubelet[3351]: W0213 15:57:54.338090 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.338207 kubelet[3351]: E0213 15:57:54.338103 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.338525 kubelet[3351]: E0213 15:57:54.338463 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.338525 kubelet[3351]: W0213 15:57:54.338473 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.338525 kubelet[3351]: E0213 15:57:54.338485 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.338991 kubelet[3351]: E0213 15:57:54.338828 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.338991 kubelet[3351]: W0213 15:57:54.338839 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.338991 kubelet[3351]: E0213 15:57:54.338890 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.339498 kubelet[3351]: E0213 15:57:54.339290 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.339498 kubelet[3351]: W0213 15:57:54.339364 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.339498 kubelet[3351]: E0213 15:57:54.339377 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.339754 kubelet[3351]: E0213 15:57:54.339612 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.339754 kubelet[3351]: W0213 15:57:54.339621 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.339754 kubelet[3351]: E0213 15:57:54.339633 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.340044 kubelet[3351]: E0213 15:57:54.339940 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.340044 kubelet[3351]: W0213 15:57:54.339963 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.340044 kubelet[3351]: E0213 15:57:54.339978 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.343885 kubelet[3351]: E0213 15:57:54.343864 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.344139 kubelet[3351]: W0213 15:57:54.343993 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.344139 kubelet[3351]: E0213 15:57:54.344020 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.344759 kubelet[3351]: E0213 15:57:54.344733 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.344852 kubelet[3351]: W0213 15:57:54.344840 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.344938 kubelet[3351]: E0213 15:57:54.344928 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.345207 kubelet[3351]: E0213 15:57:54.345184 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.345207 kubelet[3351]: W0213 15:57:54.345206 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.345289 kubelet[3351]: E0213 15:57:54.345227 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.345503 kubelet[3351]: E0213 15:57:54.345483 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.345503 kubelet[3351]: W0213 15:57:54.345501 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.345577 kubelet[3351]: E0213 15:57:54.345525 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.345701 kubelet[3351]: E0213 15:57:54.345686 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.345701 kubelet[3351]: W0213 15:57:54.345699 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.345943 kubelet[3351]: E0213 15:57:54.345928 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.346157 kubelet[3351]: E0213 15:57:54.346139 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.346157 kubelet[3351]: W0213 15:57:54.346154 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.346718 kubelet[3351]: E0213 15:57:54.346207 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.346718 kubelet[3351]: E0213 15:57:54.346305 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.346718 kubelet[3351]: W0213 15:57:54.346320 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.346718 kubelet[3351]: E0213 15:57:54.346451 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.346718 kubelet[3351]: W0213 15:57:54.346459 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.346718 kubelet[3351]: E0213 15:57:54.346470 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.346718 kubelet[3351]: E0213 15:57:54.346532 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.346718 kubelet[3351]: E0213 15:57:54.346594 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.346718 kubelet[3351]: W0213 15:57:54.346601 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.346718 kubelet[3351]: E0213 15:57:54.346614 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.346978 kubelet[3351]: E0213 15:57:54.346772 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.346978 kubelet[3351]: W0213 15:57:54.346783 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.347222 kubelet[3351]: E0213 15:57:54.347194 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.347512 kubelet[3351]: E0213 15:57:54.347490 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.347512 kubelet[3351]: W0213 15:57:54.347507 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.347600 kubelet[3351]: E0213 15:57:54.347526 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.349017 kubelet[3351]: E0213 15:57:54.347685 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.349017 kubelet[3351]: W0213 15:57:54.347697 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.349017 kubelet[3351]: E0213 15:57:54.347716 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.349017 kubelet[3351]: E0213 15:57:54.347859 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.349017 kubelet[3351]: W0213 15:57:54.347867 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.349017 kubelet[3351]: E0213 15:57:54.347880 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.349017 kubelet[3351]: E0213 15:57:54.348015 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.349017 kubelet[3351]: W0213 15:57:54.348023 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.349017 kubelet[3351]: E0213 15:57:54.348039 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.349017 kubelet[3351]: E0213 15:57:54.348191 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.349290 kubelet[3351]: W0213 15:57:54.348198 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.349290 kubelet[3351]: E0213 15:57:54.348212 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.349290 kubelet[3351]: E0213 15:57:54.348582 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.349290 kubelet[3351]: W0213 15:57:54.348591 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.349290 kubelet[3351]: E0213 15:57:54.348678 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:54.349290 kubelet[3351]: E0213 15:57:54.348774 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.349290 kubelet[3351]: W0213 15:57:54.348781 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.349290 kubelet[3351]: E0213 15:57:54.348791 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:57:54.349290 kubelet[3351]: E0213 15:57:54.349043 3351 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:57:54.349290 kubelet[3351]: W0213 15:57:54.349051 3351 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:57:54.349559 kubelet[3351]: E0213 15:57:54.349062 3351 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:57:55.039911 containerd[1756]: time="2025-02-13T15:57:55.039845252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:57:55.042881 containerd[1756]: time="2025-02-13T15:57:55.042818018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 13 15:57:55.047948 containerd[1756]: time="2025-02-13T15:57:55.047897828Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:57:55.052490 containerd[1756]: time="2025-02-13T15:57:55.052414797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:57:55.053203 containerd[1756]: time="2025-02-13T15:57:55.053169358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.48780623s" Feb 13 15:57:55.053469 containerd[1756]: time="2025-02-13T15:57:55.053339879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 15:57:55.056568 containerd[1756]: time="2025-02-13T15:57:55.056516245Z" level=info msg="CreateContainer within sandbox \"2ba2988873d1d88761c37c4c2a874a4714cde096e239f70a45cfb5942d40e7b0\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:57:55.099522 containerd[1756]: time="2025-02-13T15:57:55.099398730Z" level=info msg="CreateContainer within sandbox \"2ba2988873d1d88761c37c4c2a874a4714cde096e239f70a45cfb5942d40e7b0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b50eb2cd2c6a0272d7cb5ca7c64d2c81a9826b958391e34ea2d00e7f4bf600c7\"" Feb 13 15:57:55.101490 containerd[1756]: time="2025-02-13T15:57:55.101402054Z" level=info msg="StartContainer for \"b50eb2cd2c6a0272d7cb5ca7c64d2c81a9826b958391e34ea2d00e7f4bf600c7\"" Feb 13 15:57:55.135527 systemd[1]: Started cri-containerd-b50eb2cd2c6a0272d7cb5ca7c64d2c81a9826b958391e34ea2d00e7f4bf600c7.scope - libcontainer container b50eb2cd2c6a0272d7cb5ca7c64d2c81a9826b958391e34ea2d00e7f4bf600c7. Feb 13 15:57:55.154609 kubelet[3351]: E0213 15:57:55.154555 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbpxc" podUID="2d8778e0-23a8-47a6-b01b-5b701fc009d0" Feb 13 15:57:55.171597 containerd[1756]: time="2025-02-13T15:57:55.171549553Z" level=info msg="StartContainer for \"b50eb2cd2c6a0272d7cb5ca7c64d2c81a9826b958391e34ea2d00e7f4bf600c7\" returns successfully" Feb 13 15:57:55.182477 systemd[1]: cri-containerd-b50eb2cd2c6a0272d7cb5ca7c64d2c81a9826b958391e34ea2d00e7f4bf600c7.scope: Deactivated successfully. 
Feb 13 15:57:55.250708 kubelet[3351]: I0213 15:57:55.250288 3351 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:57:55.269964 kubelet[3351]: I0213 15:57:55.269408 3351 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-57f479dc7f-m9lcf" podStartSLOduration=3.1387324420000002 podStartE2EDuration="5.269362947s" podCreationTimestamp="2025-02-13 15:57:50 +0000 UTC" firstStartedPulling="2025-02-13 15:57:51.432950099 +0000 UTC m=+23.418240681" lastFinishedPulling="2025-02-13 15:57:53.563580604 +0000 UTC m=+25.548871186" observedRunningTime="2025-02-13 15:57:54.261989829 +0000 UTC m=+26.247280411" watchObservedRunningTime="2025-02-13 15:57:55.269362947 +0000 UTC m=+27.254653529" Feb 13 15:57:55.568825 systemd[1]: run-containerd-runc-k8s.io-b50eb2cd2c6a0272d7cb5ca7c64d2c81a9826b958391e34ea2d00e7f4bf600c7-runc.687RAE.mount: Deactivated successfully. Feb 13 15:57:55.569169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b50eb2cd2c6a0272d7cb5ca7c64d2c81a9826b958391e34ea2d00e7f4bf600c7-rootfs.mount: Deactivated successfully. 
Feb 13 15:57:56.092835 containerd[1756]: time="2025-02-13T15:57:56.092748700Z" level=info msg="shim disconnected" id=b50eb2cd2c6a0272d7cb5ca7c64d2c81a9826b958391e34ea2d00e7f4bf600c7 namespace=k8s.io
Feb 13 15:57:56.092835 containerd[1756]: time="2025-02-13T15:57:56.092828500Z" level=warning msg="cleaning up after shim disconnected" id=b50eb2cd2c6a0272d7cb5ca7c64d2c81a9826b958391e34ea2d00e7f4bf600c7 namespace=k8s.io
Feb 13 15:57:56.093215 containerd[1756]: time="2025-02-13T15:57:56.092836980Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:57:56.254953 containerd[1756]: time="2025-02-13T15:57:56.254888341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 15:57:57.155405 kubelet[3351]: E0213 15:57:57.155314 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbpxc" podUID="2d8778e0-23a8-47a6-b01b-5b701fc009d0"
Feb 13 15:57:59.155046 kubelet[3351]: E0213 15:57:59.154913 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbpxc" podUID="2d8778e0-23a8-47a6-b01b-5b701fc009d0"
Feb 13 15:57:59.329077 containerd[1756]: time="2025-02-13T15:57:59.328322162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:57:59.330743 containerd[1756]: time="2025-02-13T15:57:59.330690967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Feb 13 15:57:59.333502 containerd[1756]: time="2025-02-13T15:57:59.333452132Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:57:59.338632 containerd[1756]: time="2025-02-13T15:57:59.338524903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:57:59.339721 containerd[1756]: time="2025-02-13T15:57:59.339146224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.084220523s"
Feb 13 15:57:59.339721 containerd[1756]: time="2025-02-13T15:57:59.339179184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Feb 13 15:57:59.342014 containerd[1756]: time="2025-02-13T15:57:59.341951509Z" level=info msg="CreateContainer within sandbox \"2ba2988873d1d88761c37c4c2a874a4714cde096e239f70a45cfb5942d40e7b0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 15:57:59.390054 containerd[1756]: time="2025-02-13T15:57:59.389998565Z" level=info msg="CreateContainer within sandbox \"2ba2988873d1d88761c37c4c2a874a4714cde096e239f70a45cfb5942d40e7b0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8e737036a7088aa7c754d4b697ec28a5d0edd78b9a5d78e19854cb1f202b0415\""
Feb 13 15:57:59.391461 containerd[1756]: time="2025-02-13T15:57:59.391363368Z" level=info msg="StartContainer for \"8e737036a7088aa7c754d4b697ec28a5d0edd78b9a5d78e19854cb1f202b0415\""
Feb 13 15:57:59.420514 systemd[1]: Started cri-containerd-8e737036a7088aa7c754d4b697ec28a5d0edd78b9a5d78e19854cb1f202b0415.scope - libcontainer container 8e737036a7088aa7c754d4b697ec28a5d0edd78b9a5d78e19854cb1f202b0415.
Feb 13 15:57:59.450233 containerd[1756]: time="2025-02-13T15:57:59.450173165Z" level=info msg="StartContainer for \"8e737036a7088aa7c754d4b697ec28a5d0edd78b9a5d78e19854cb1f202b0415\" returns successfully"
Feb 13 15:58:00.569429 systemd[1]: cri-containerd-8e737036a7088aa7c754d4b697ec28a5d0edd78b9a5d78e19854cb1f202b0415.scope: Deactivated successfully.
Feb 13 15:58:00.590646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e737036a7088aa7c754d4b697ec28a5d0edd78b9a5d78e19854cb1f202b0415-rootfs.mount: Deactivated successfully.
Feb 13 15:58:00.602515 kubelet[3351]: I0213 15:58:00.602479 3351 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:58:00.906971 kubelet[3351]: I0213 15:58:00.639160 3351 topology_manager.go:215] "Topology Admit Handler" podUID="7dd69579-7ca4-4802-a6f8-37ab66ddcef1" podNamespace="kube-system" podName="coredns-76f75df574-xz2g6"
Feb 13 15:58:00.906971 kubelet[3351]: I0213 15:58:00.649894 3351 topology_manager.go:215] "Topology Admit Handler" podUID="a37d0509-8180-495d-aac5-1e394b4d33c7" podNamespace="kube-system" podName="coredns-76f75df574-j7w87"
Feb 13 15:58:00.906971 kubelet[3351]: I0213 15:58:00.657091 3351 topology_manager.go:215] "Topology Admit Handler" podUID="38b25543-f3d8-4325-8684-f120f4c5229a" podNamespace="calico-apiserver" podName="calico-apiserver-5885744d75-ph4x5"
Feb 13 15:58:00.906971 kubelet[3351]: I0213 15:58:00.657328 3351 topology_manager.go:215] "Topology Admit Handler" podUID="17906d2c-fcbc-4df1-8ac2-176024c123e0" podNamespace="calico-system" podName="calico-kube-controllers-684dd8d987-7vztn"
Feb 13 15:58:00.906971 kubelet[3351]: I0213 15:58:00.657455 3351 topology_manager.go:215] "Topology Admit Handler" podUID="4713ae5a-4a4f-4494-8aa7-cdc51f64b486" podNamespace="calico-apiserver" podName="calico-apiserver-5885744d75-9dqxt"
Feb 13 15:58:00.906971 kubelet[3351]: I0213 15:58:00.693474 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l27vq\" (UniqueName: \"kubernetes.io/projected/17906d2c-fcbc-4df1-8ac2-176024c123e0-kube-api-access-l27vq\") pod \"calico-kube-controllers-684dd8d987-7vztn\" (UID: \"17906d2c-fcbc-4df1-8ac2-176024c123e0\") " pod="calico-system/calico-kube-controllers-684dd8d987-7vztn"
Feb 13 15:58:00.906971 kubelet[3351]: I0213 15:58:00.693515 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17906d2c-fcbc-4df1-8ac2-176024c123e0-tigera-ca-bundle\") pod \"calico-kube-controllers-684dd8d987-7vztn\" (UID: \"17906d2c-fcbc-4df1-8ac2-176024c123e0\") " pod="calico-system/calico-kube-controllers-684dd8d987-7vztn"
Feb 13 15:58:00.647946 systemd[1]: Created slice kubepods-burstable-pod7dd69579_7ca4_4802_a6f8_37ab66ddcef1.slice - libcontainer container kubepods-burstable-pod7dd69579_7ca4_4802_a6f8_37ab66ddcef1.slice.
Feb 13 15:58:00.907265 kubelet[3351]: I0213 15:58:00.693594 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/38b25543-f3d8-4325-8684-f120f4c5229a-calico-apiserver-certs\") pod \"calico-apiserver-5885744d75-ph4x5\" (UID: \"38b25543-f3d8-4325-8684-f120f4c5229a\") " pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5"
Feb 13 15:58:00.907265 kubelet[3351]: I0213 15:58:00.693616 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b29fz\" (UniqueName: \"kubernetes.io/projected/38b25543-f3d8-4325-8684-f120f4c5229a-kube-api-access-b29fz\") pod \"calico-apiserver-5885744d75-ph4x5\" (UID: \"38b25543-f3d8-4325-8684-f120f4c5229a\") " pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5"
Feb 13 15:58:00.907265 kubelet[3351]: I0213 15:58:00.693648 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dd69579-7ca4-4802-a6f8-37ab66ddcef1-config-volume\") pod \"coredns-76f75df574-xz2g6\" (UID: \"7dd69579-7ca4-4802-a6f8-37ab66ddcef1\") " pod="kube-system/coredns-76f75df574-xz2g6"
Feb 13 15:58:00.907265 kubelet[3351]: I0213 15:58:00.693684 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xwk5\" (UniqueName: \"kubernetes.io/projected/7dd69579-7ca4-4802-a6f8-37ab66ddcef1-kube-api-access-8xwk5\") pod \"coredns-76f75df574-xz2g6\" (UID: \"7dd69579-7ca4-4802-a6f8-37ab66ddcef1\") " pod="kube-system/coredns-76f75df574-xz2g6"
Feb 13 15:58:00.907265 kubelet[3351]: I0213 15:58:00.693736 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwgc7\" (UniqueName: \"kubernetes.io/projected/4713ae5a-4a4f-4494-8aa7-cdc51f64b486-kube-api-access-rwgc7\") pod \"calico-apiserver-5885744d75-9dqxt\" (UID: \"4713ae5a-4a4f-4494-8aa7-cdc51f64b486\") " pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt"
Feb 13 15:58:00.667234 systemd[1]: Created slice kubepods-burstable-poda37d0509_8180_495d_aac5_1e394b4d33c7.slice - libcontainer container kubepods-burstable-poda37d0509_8180_495d_aac5_1e394b4d33c7.slice.
Feb 13 15:58:00.907459 kubelet[3351]: I0213 15:58:00.694430 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a37d0509-8180-495d-aac5-1e394b4d33c7-config-volume\") pod \"coredns-76f75df574-j7w87\" (UID: \"a37d0509-8180-495d-aac5-1e394b4d33c7\") " pod="kube-system/coredns-76f75df574-j7w87"
Feb 13 15:58:00.907459 kubelet[3351]: I0213 15:58:00.694523 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht664\" (UniqueName: \"kubernetes.io/projected/a37d0509-8180-495d-aac5-1e394b4d33c7-kube-api-access-ht664\") pod \"coredns-76f75df574-j7w87\" (UID: \"a37d0509-8180-495d-aac5-1e394b4d33c7\") " pod="kube-system/coredns-76f75df574-j7w87"
Feb 13 15:58:00.907459 kubelet[3351]: I0213 15:58:00.694563 3351 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4713ae5a-4a4f-4494-8aa7-cdc51f64b486-calico-apiserver-certs\") pod \"calico-apiserver-5885744d75-9dqxt\" (UID: \"4713ae5a-4a4f-4494-8aa7-cdc51f64b486\") " pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt"
Feb 13 15:58:00.679675 systemd[1]: Created slice kubepods-besteffort-pod4713ae5a_4a4f_4494_8aa7_cdc51f64b486.slice - libcontainer container kubepods-besteffort-pod4713ae5a_4a4f_4494_8aa7_cdc51f64b486.slice.
Feb 13 15:58:00.690370 systemd[1]: Created slice kubepods-besteffort-pod38b25543_f3d8_4325_8684_f120f4c5229a.slice - libcontainer container kubepods-besteffort-pod38b25543_f3d8_4325_8684_f120f4c5229a.slice.
Feb 13 15:58:00.700977 systemd[1]: Created slice kubepods-besteffort-pod17906d2c_fcbc_4df1_8ac2_176024c123e0.slice - libcontainer container kubepods-besteffort-pod17906d2c_fcbc_4df1_8ac2_176024c123e0.slice.
Feb 13 15:58:01.161970 systemd[1]: Created slice kubepods-besteffort-pod2d8778e0_23a8_47a6_b01b_5b701fc009d0.slice - libcontainer container kubepods-besteffort-pod2d8778e0_23a8_47a6_b01b_5b701fc009d0.slice.
Feb 13 15:58:01.165219 containerd[1756]: time="2025-02-13T15:58:01.165150134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbpxc,Uid:2d8778e0-23a8-47a6-b01b-5b701fc009d0,Namespace:calico-system,Attempt:0,}"
Feb 13 15:58:01.663517 containerd[1756]: time="2025-02-13T15:58:01.663417605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xz2g6,Uid:7dd69579-7ca4-4802-a6f8-37ab66ddcef1,Namespace:kube-system,Attempt:0,}"
Feb 13 15:58:01.686039 containerd[1756]: time="2025-02-13T15:58:01.685639289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-9dqxt,Uid:4713ae5a-4a4f-4494-8aa7-cdc51f64b486,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 15:58:01.686039 containerd[1756]: time="2025-02-13T15:58:01.685733009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd8d987-7vztn,Uid:17906d2c-fcbc-4df1-8ac2-176024c123e0,Namespace:calico-system,Attempt:0,}"
Feb 13 15:58:01.686039 containerd[1756]: time="2025-02-13T15:58:01.685908690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j7w87,Uid:a37d0509-8180-495d-aac5-1e394b4d33c7,Namespace:kube-system,Attempt:0,}"
Feb 13 15:58:01.686039 containerd[1756]: time="2025-02-13T15:58:01.685916930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 15:58:01.756796 containerd[1756]: time="2025-02-13T15:58:01.756728831Z" level=info msg="shim disconnected" id=8e737036a7088aa7c754d4b697ec28a5d0edd78b9a5d78e19854cb1f202b0415 namespace=k8s.io
Feb 13 15:58:01.756796 containerd[1756]: time="2025-02-13T15:58:01.756788551Z" level=warning msg="cleaning up after shim disconnected" id=8e737036a7088aa7c754d4b697ec28a5d0edd78b9a5d78e19854cb1f202b0415 namespace=k8s.io
Feb 13 15:58:01.756796 containerd[1756]: time="2025-02-13T15:58:01.756798071Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:58:02.015509 containerd[1756]: time="2025-02-13T15:58:02.015367785Z" level=error msg="Failed to destroy network for sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.016326 containerd[1756]: time="2025-02-13T15:58:02.015887906Z" level=error msg="encountered an error cleaning up failed sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.016326 containerd[1756]: time="2025-02-13T15:58:02.015962466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbpxc,Uid:2d8778e0-23a8-47a6-b01b-5b701fc009d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.016481 kubelet[3351]: E0213 15:58:02.016198 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.016481 kubelet[3351]: E0213 15:58:02.016255 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbpxc"
Feb 13 15:58:02.016481 kubelet[3351]: E0213 15:58:02.016276 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbpxc"
Feb 13 15:58:02.017401 kubelet[3351]: E0213 15:58:02.017154 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qbpxc_calico-system(2d8778e0-23a8-47a6-b01b-5b701fc009d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qbpxc_calico-system(2d8778e0-23a8-47a6-b01b-5b701fc009d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qbpxc" podUID="2d8778e0-23a8-47a6-b01b-5b701fc009d0"
Feb 13 15:58:02.072896 containerd[1756]: time="2025-02-13T15:58:02.072785099Z" level=error msg="Failed to destroy network for sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.075039 containerd[1756]: time="2025-02-13T15:58:02.074976303Z" level=error msg="encountered an error cleaning up failed sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.075223 containerd[1756]: time="2025-02-13T15:58:02.075121144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j7w87,Uid:a37d0509-8180-495d-aac5-1e394b4d33c7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.075655 kubelet[3351]: E0213 15:58:02.075414 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.075655 kubelet[3351]: E0213 15:58:02.075465 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j7w87"
Feb 13 15:58:02.075655 kubelet[3351]: E0213 15:58:02.075489 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j7w87"
Feb 13 15:58:02.075766 kubelet[3351]: E0213 15:58:02.075548 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-j7w87_kube-system(a37d0509-8180-495d-aac5-1e394b4d33c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-j7w87_kube-system(a37d0509-8180-495d-aac5-1e394b4d33c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-j7w87" podUID="a37d0509-8180-495d-aac5-1e394b4d33c7"
Feb 13 15:58:02.085260 containerd[1756]: time="2025-02-13T15:58:02.085108563Z" level=error msg="Failed to destroy network for sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.085900 containerd[1756]: time="2025-02-13T15:58:02.085653405Z" level=error msg="encountered an error cleaning up failed sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.085900 containerd[1756]: time="2025-02-13T15:58:02.085716205Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xz2g6,Uid:7dd69579-7ca4-4802-a6f8-37ab66ddcef1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.086843 kubelet[3351]: E0213 15:58:02.086172 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.086843 kubelet[3351]: E0213 15:58:02.086231 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xz2g6"
Feb 13 15:58:02.086843 kubelet[3351]: E0213 15:58:02.086251 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xz2g6"
Feb 13 15:58:02.086998 kubelet[3351]: E0213 15:58:02.086327 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xz2g6_kube-system(7dd69579-7ca4-4802-a6f8-37ab66ddcef1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xz2g6_kube-system(7dd69579-7ca4-4802-a6f8-37ab66ddcef1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xz2g6" podUID="7dd69579-7ca4-4802-a6f8-37ab66ddcef1"
Feb 13 15:58:02.091521 containerd[1756]: time="2025-02-13T15:58:02.091469456Z" level=error msg="Failed to destroy network for sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.092026 containerd[1756]: time="2025-02-13T15:58:02.092000897Z" level=error msg="encountered an error cleaning up failed sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.092152 containerd[1756]: time="2025-02-13T15:58:02.092130777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-9dqxt,Uid:4713ae5a-4a4f-4494-8aa7-cdc51f64b486,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.092689 kubelet[3351]: E0213 15:58:02.092511 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.092689 kubelet[3351]: E0213 15:58:02.092564 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt"
Feb 13 15:58:02.092689 kubelet[3351]: E0213 15:58:02.092592 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt"
Feb 13 15:58:02.092818 kubelet[3351]: E0213 15:58:02.092654 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5885744d75-9dqxt_calico-apiserver(4713ae5a-4a4f-4494-8aa7-cdc51f64b486)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5885744d75-9dqxt_calico-apiserver(4713ae5a-4a4f-4494-8aa7-cdc51f64b486)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" podUID="4713ae5a-4a4f-4494-8aa7-cdc51f64b486"
Feb 13 15:58:02.101011 containerd[1756]: time="2025-02-13T15:58:02.100227953Z" level=error msg="Failed to destroy network for sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.101011 containerd[1756]: time="2025-02-13T15:58:02.100650194Z" level=error msg="encountered an error cleaning up failed sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.101011 containerd[1756]: time="2025-02-13T15:58:02.100725714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd8d987-7vztn,Uid:17906d2c-fcbc-4df1-8ac2-176024c123e0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.101280 kubelet[3351]: E0213 15:58:02.101243 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.101356 kubelet[3351]: E0213 15:58:02.101339 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn"
Feb 13 15:58:02.101398 kubelet[3351]: E0213 15:58:02.101367 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn"
Feb 13 15:58:02.101464 kubelet[3351]: E0213 15:58:02.101446 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-684dd8d987-7vztn_calico-system(17906d2c-fcbc-4df1-8ac2-176024c123e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-684dd8d987-7vztn_calico-system(17906d2c-fcbc-4df1-8ac2-176024c123e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" podUID="17906d2c-fcbc-4df1-8ac2-176024c123e0"
Feb 13 15:58:02.108096 containerd[1756]: time="2025-02-13T15:58:02.108042929Z" level=error msg="Failed to destroy network for sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.108453 containerd[1756]: time="2025-02-13T15:58:02.108423850Z" level=error msg="encountered an error cleaning up failed sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.108533 containerd[1756]: time="2025-02-13T15:58:02.108500010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.108787 kubelet[3351]: E0213 15:58:02.108759 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:02.108852 kubelet[3351]: E0213 15:58:02.108819 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5"
Feb 13 15:58:02.108852 kubelet[3351]: E0213 15:58:02.108840 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5"
Feb 13 15:58:02.108900 kubelet[3351]: E0213 15:58:02.108889 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5885744d75-ph4x5_calico-apiserver(38b25543-f3d8-4325-8684-f120f4c5229a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5885744d75-ph4x5_calico-apiserver(38b25543-f3d8-4325-8684-f120f4c5229a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" podUID="38b25543-f3d8-4325-8684-f120f4c5229a"
Feb 13 15:58:02.271033 kubelet[3351]: I0213 15:58:02.270346 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c"
Feb 13 15:58:02.271757 containerd[1756]: time="2025-02-13T15:58:02.271343014Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\""
Feb 13 15:58:02.271757 containerd[1756]: time="2025-02-13T15:58:02.271568374Z" level=info msg="Ensure that sandbox 9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c in task-service has been cleanup successfully"
Feb 13 15:58:02.272275 containerd[1756]: time="2025-02-13T15:58:02.272130975Z" level=info msg="TearDown network for sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" successfully"
Feb 13 15:58:02.272275 containerd[1756]: time="2025-02-13T15:58:02.272154415Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" returns successfully"
Feb 13 15:58:02.273203 containerd[1756]: time="2025-02-13T15:58:02.273177697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:1,}"
Feb 13 15:58:02.275277 kubelet[3351]: I0213 15:58:02.273586 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec"
Feb 13 15:58:02.275413 containerd[1756]: time="2025-02-13T15:58:02.274072539Z" level=info msg="StopPodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\""
Feb 13 15:58:02.275413 containerd[1756]: time="2025-02-13T15:58:02.274368180Z" level=info msg="Ensure that sandbox afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec in task-service has been cleanup successfully"
Feb 13 15:58:02.275413 containerd[1756]: time="2025-02-13T15:58:02.274584700Z" level=info msg="TearDown network for sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" successfully"
Feb 13 15:58:02.275413 containerd[1756]: time="2025-02-13T15:58:02.274600220Z" level=info msg="StopPodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" returns successfully"
Feb 13 15:58:02.277342 kubelet[3351]: I0213 15:58:02.277309 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192"
Feb 13 15:58:02.277706 containerd[1756]: time="2025-02-13T15:58:02.277673386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-9dqxt,Uid:4713ae5a-4a4f-4494-8aa7-cdc51f64b486,Namespace:calico-apiserver,Attempt:1,}"
Feb 13 15:58:02.278120 containerd[1756]: time="2025-02-13T15:58:02.277807187Z" level=info msg="StopPodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\""
Feb 13 15:58:02.278120 containerd[1756]: time="2025-02-13T15:58:02.278015747Z" level=info msg="Ensure that sandbox
8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192 in task-service has been cleanup successfully" Feb 13 15:58:02.278455 containerd[1756]: time="2025-02-13T15:58:02.278405028Z" level=info msg="TearDown network for sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" successfully" Feb 13 15:58:02.278455 containerd[1756]: time="2025-02-13T15:58:02.278434668Z" level=info msg="StopPodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" returns successfully" Feb 13 15:58:02.279070 containerd[1756]: time="2025-02-13T15:58:02.279044469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xz2g6,Uid:7dd69579-7ca4-4802-a6f8-37ab66ddcef1,Namespace:kube-system,Attempt:1,}" Feb 13 15:58:02.279606 kubelet[3351]: I0213 15:58:02.279580 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27" Feb 13 15:58:02.280599 containerd[1756]: time="2025-02-13T15:58:02.280459352Z" level=info msg="StopPodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\"" Feb 13 15:58:02.281080 containerd[1756]: time="2025-02-13T15:58:02.280923073Z" level=info msg="Ensure that sandbox 755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27 in task-service has been cleanup successfully" Feb 13 15:58:02.281140 kubelet[3351]: I0213 15:58:02.281004 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52" Feb 13 15:58:02.281440 containerd[1756]: time="2025-02-13T15:58:02.281420234Z" level=info msg="TearDown network for sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" successfully" Feb 13 15:58:02.281585 containerd[1756]: time="2025-02-13T15:58:02.281508394Z" level=info msg="StopPodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" 
returns successfully" Feb 13 15:58:02.281850 containerd[1756]: time="2025-02-13T15:58:02.281788914Z" level=info msg="StopPodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\"" Feb 13 15:58:02.282255 containerd[1756]: time="2025-02-13T15:58:02.282215235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j7w87,Uid:a37d0509-8180-495d-aac5-1e394b4d33c7,Namespace:kube-system,Attempt:1,}" Feb 13 15:58:02.282543 containerd[1756]: time="2025-02-13T15:58:02.282400396Z" level=info msg="Ensure that sandbox fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52 in task-service has been cleanup successfully" Feb 13 15:58:02.282667 containerd[1756]: time="2025-02-13T15:58:02.282643756Z" level=info msg="TearDown network for sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" successfully" Feb 13 15:58:02.282724 containerd[1756]: time="2025-02-13T15:58:02.282711436Z" level=info msg="StopPodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" returns successfully" Feb 13 15:58:02.283186 containerd[1756]: time="2025-02-13T15:58:02.283161797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd8d987-7vztn,Uid:17906d2c-fcbc-4df1-8ac2-176024c123e0,Namespace:calico-system,Attempt:1,}" Feb 13 15:58:02.283542 kubelet[3351]: I0213 15:58:02.283515 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788" Feb 13 15:58:02.285045 containerd[1756]: time="2025-02-13T15:58:02.284759280Z" level=info msg="StopPodSandbox for \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\"" Feb 13 15:58:02.285045 containerd[1756]: time="2025-02-13T15:58:02.284925521Z" level=info msg="Ensure that sandbox bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788 in task-service has been cleanup successfully" Feb 13 15:58:02.285238 
containerd[1756]: time="2025-02-13T15:58:02.285207761Z" level=info msg="TearDown network for sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" successfully" Feb 13 15:58:02.285805 containerd[1756]: time="2025-02-13T15:58:02.285666242Z" level=info msg="StopPodSandbox for \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" returns successfully" Feb 13 15:58:02.289153 containerd[1756]: time="2025-02-13T15:58:02.288940049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbpxc,Uid:2d8778e0-23a8-47a6-b01b-5b701fc009d0,Namespace:calico-system,Attempt:1,}" Feb 13 15:58:02.290160 containerd[1756]: time="2025-02-13T15:58:02.289355450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:58:02.549408 containerd[1756]: time="2025-02-13T15:58:02.548971086Z" level=error msg="Failed to destroy network for sandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.551390 containerd[1756]: time="2025-02-13T15:58:02.551331730Z" level=error msg="encountered an error cleaning up failed sandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.551853 containerd[1756]: time="2025-02-13T15:58:02.551605971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-9dqxt,Uid:4713ae5a-4a4f-4494-8aa7-cdc51f64b486,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.552430 kubelet[3351]: E0213 15:58:02.552234 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.552430 kubelet[3351]: E0213 15:58:02.552290 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" Feb 13 15:58:02.553482 kubelet[3351]: E0213 15:58:02.553018 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" Feb 13 15:58:02.553482 kubelet[3351]: E0213 15:58:02.553138 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5885744d75-9dqxt_calico-apiserver(4713ae5a-4a4f-4494-8aa7-cdc51f64b486)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5885744d75-9dqxt_calico-apiserver(4713ae5a-4a4f-4494-8aa7-cdc51f64b486)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" podUID="4713ae5a-4a4f-4494-8aa7-cdc51f64b486" Feb 13 15:58:02.606179 containerd[1756]: time="2025-02-13T15:58:02.606127079Z" level=error msg="Failed to destroy network for sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.606699 containerd[1756]: time="2025-02-13T15:58:02.606504480Z" level=error msg="encountered an error cleaning up failed sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.606699 containerd[1756]: time="2025-02-13T15:58:02.606564920Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.607282 kubelet[3351]: E0213 15:58:02.607085 3351 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.607282 kubelet[3351]: E0213 15:58:02.607144 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" Feb 13 15:58:02.607282 kubelet[3351]: E0213 15:58:02.607165 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" Feb 13 15:58:02.610231 kubelet[3351]: E0213 15:58:02.607218 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5885744d75-ph4x5_calico-apiserver(38b25543-f3d8-4325-8684-f120f4c5229a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5885744d75-ph4x5_calico-apiserver(38b25543-f3d8-4325-8684-f120f4c5229a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" podUID="38b25543-f3d8-4325-8684-f120f4c5229a" Feb 13 15:58:02.642167 containerd[1756]: time="2025-02-13T15:58:02.642032191Z" level=error msg="Failed to destroy network for sandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.646349 containerd[1756]: time="2025-02-13T15:58:02.645686358Z" level=error msg="Failed to destroy network for sandbox \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.647677 containerd[1756]: time="2025-02-13T15:58:02.647624642Z" level=error msg="encountered an error cleaning up failed sandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.647764 containerd[1756]: time="2025-02-13T15:58:02.647717562Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xz2g6,Uid:7dd69579-7ca4-4802-a6f8-37ab66ddcef1,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:58:02.649032 containerd[1756]: time="2025-02-13T15:58:02.647938122Z" level=error msg="encountered an error cleaning up failed sandbox \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.649032 containerd[1756]: time="2025-02-13T15:58:02.647973323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j7w87,Uid:a37d0509-8180-495d-aac5-1e394b4d33c7,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.649133 kubelet[3351]: E0213 15:58:02.648483 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.649133 kubelet[3351]: E0213 15:58:02.648497 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.649133 kubelet[3351]: E0213 15:58:02.648534 3351 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j7w87" Feb 13 15:58:02.649133 kubelet[3351]: E0213 15:58:02.648541 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xz2g6" Feb 13 15:58:02.649245 kubelet[3351]: E0213 15:58:02.648554 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j7w87" Feb 13 15:58:02.649245 kubelet[3351]: E0213 15:58:02.648559 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xz2g6" Feb 13 15:58:02.649245 kubelet[3351]: E0213 15:58:02.648605 3351 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xz2g6_kube-system(7dd69579-7ca4-4802-a6f8-37ab66ddcef1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xz2g6_kube-system(7dd69579-7ca4-4802-a6f8-37ab66ddcef1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xz2g6" podUID="7dd69579-7ca4-4802-a6f8-37ab66ddcef1" Feb 13 15:58:02.649381 kubelet[3351]: E0213 15:58:02.648608 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-j7w87_kube-system(a37d0509-8180-495d-aac5-1e394b4d33c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-j7w87_kube-system(a37d0509-8180-495d-aac5-1e394b4d33c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-j7w87" podUID="a37d0509-8180-495d-aac5-1e394b4d33c7" Feb 13 15:58:02.653996 containerd[1756]: time="2025-02-13T15:58:02.653800654Z" level=error msg="Failed to destroy network for sandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.658423 containerd[1756]: time="2025-02-13T15:58:02.655387577Z" level=error msg="encountered 
an error cleaning up failed sandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.658566 containerd[1756]: time="2025-02-13T15:58:02.658462263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd8d987-7vztn,Uid:17906d2c-fcbc-4df1-8ac2-176024c123e0,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.659007 kubelet[3351]: E0213 15:58:02.658969 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.659338 kubelet[3351]: E0213 15:58:02.659057 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" Feb 13 15:58:02.659338 kubelet[3351]: E0213 15:58:02.659079 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" Feb 13 15:58:02.659338 kubelet[3351]: E0213 15:58:02.659138 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-684dd8d987-7vztn_calico-system(17906d2c-fcbc-4df1-8ac2-176024c123e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-684dd8d987-7vztn_calico-system(17906d2c-fcbc-4df1-8ac2-176024c123e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" podUID="17906d2c-fcbc-4df1-8ac2-176024c123e0" Feb 13 15:58:02.662266 containerd[1756]: time="2025-02-13T15:58:02.662208631Z" level=error msg="Failed to destroy network for sandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.662585 containerd[1756]: time="2025-02-13T15:58:02.662552872Z" level=error msg="encountered an error cleaning up failed sandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.662636 containerd[1756]: time="2025-02-13T15:58:02.662623672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbpxc,Uid:2d8778e0-23a8-47a6-b01b-5b701fc009d0,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.662929 kubelet[3351]: E0213 15:58:02.662893 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:02.662984 kubelet[3351]: E0213 15:58:02.662954 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbpxc" Feb 13 15:58:02.662984 kubelet[3351]: E0213 15:58:02.662973 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-qbpxc" Feb 13 15:58:02.663086 kubelet[3351]: E0213 15:58:02.663030 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qbpxc_calico-system(2d8778e0-23a8-47a6-b01b-5b701fc009d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qbpxc_calico-system(2d8778e0-23a8-47a6-b01b-5b701fc009d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qbpxc" podUID="2d8778e0-23a8-47a6-b01b-5b701fc009d0" Feb 13 15:58:02.862083 systemd[1]: run-netns-cni\x2d9faf7ee9\x2dd58a\x2d45f8\x2d70f2\x2df77089fbc2b6.mount: Deactivated successfully. Feb 13 15:58:02.862175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec-shm.mount: Deactivated successfully. Feb 13 15:58:02.862227 systemd[1]: run-netns-cni\x2dc2df745b\x2d55c5\x2dc2c9\x2d89af\x2da7650f5bb61a.mount: Deactivated successfully. Feb 13 15:58:02.862319 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27-shm.mount: Deactivated successfully. Feb 13 15:58:02.862371 systemd[1]: run-netns-cni\x2d29117756\x2d008e\x2d140b\x2d5d17\x2d80c5c409a5be.mount: Deactivated successfully. Feb 13 15:58:02.862416 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788-shm.mount: Deactivated successfully. 
Feb 13 15:58:03.250792 kubelet[3351]: I0213 15:58:03.250435 3351 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:58:03.292005 kubelet[3351]: I0213 15:58:03.291955 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5" Feb 13 15:58:03.294157 containerd[1756]: time="2025-02-13T15:58:03.294087407Z" level=info msg="StopPodSandbox for \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\"" Feb 13 15:58:03.294944 containerd[1756]: time="2025-02-13T15:58:03.294285408Z" level=info msg="Ensure that sandbox ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5 in task-service has been cleanup successfully" Feb 13 15:58:03.296105 containerd[1756]: time="2025-02-13T15:58:03.295958051Z" level=info msg="TearDown network for sandbox \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\" successfully" Feb 13 15:58:03.296105 containerd[1756]: time="2025-02-13T15:58:03.295995371Z" level=info msg="StopPodSandbox for \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\" returns successfully" Feb 13 15:58:03.297390 containerd[1756]: time="2025-02-13T15:58:03.296758612Z" level=info msg="StopPodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\"" Feb 13 15:58:03.297390 containerd[1756]: time="2025-02-13T15:58:03.296868013Z" level=info msg="TearDown network for sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" successfully" Feb 13 15:58:03.297390 containerd[1756]: time="2025-02-13T15:58:03.296878613Z" level=info msg="StopPodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" returns successfully" Feb 13 15:58:03.297622 containerd[1756]: time="2025-02-13T15:58:03.297579774Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-j7w87,Uid:a37d0509-8180-495d-aac5-1e394b4d33c7,Namespace:kube-system,Attempt:2,}" Feb 13 15:58:03.300112 kubelet[3351]: I0213 15:58:03.299145 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09" Feb 13 15:58:03.299729 systemd[1]: run-netns-cni\x2d6f044828\x2d0250\x2d5035\x2df0ab\x2de5689a618d6b.mount: Deactivated successfully. Feb 13 15:58:03.301392 containerd[1756]: time="2025-02-13T15:58:03.301114501Z" level=info msg="StopPodSandbox for \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\"" Feb 13 15:58:03.302319 containerd[1756]: time="2025-02-13T15:58:03.301621102Z" level=info msg="Ensure that sandbox 208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09 in task-service has been cleanup successfully" Feb 13 15:58:03.302519 containerd[1756]: time="2025-02-13T15:58:03.302494064Z" level=info msg="TearDown network for sandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\" successfully" Feb 13 15:58:03.302588 containerd[1756]: time="2025-02-13T15:58:03.302575664Z" level=info msg="StopPodSandbox for \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\" returns successfully" Feb 13 15:58:03.304767 containerd[1756]: time="2025-02-13T15:58:03.304721348Z" level=info msg="StopPodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\"" Feb 13 15:58:03.305165 systemd[1]: run-netns-cni\x2dfdf29a31\x2d72e5\x2d5afa\x2d300e\x2d6a411bdeb710.mount: Deactivated successfully. 
Feb 13 15:58:03.306897 containerd[1756]: time="2025-02-13T15:58:03.305489430Z" level=info msg="TearDown network for sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" successfully" Feb 13 15:58:03.306897 containerd[1756]: time="2025-02-13T15:58:03.305596950Z" level=info msg="StopPodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" returns successfully" Feb 13 15:58:03.307488 containerd[1756]: time="2025-02-13T15:58:03.307163953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd8d987-7vztn,Uid:17906d2c-fcbc-4df1-8ac2-176024c123e0,Namespace:calico-system,Attempt:2,}" Feb 13 15:58:03.308276 kubelet[3351]: I0213 15:58:03.308182 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67" Feb 13 15:58:03.313166 containerd[1756]: time="2025-02-13T15:58:03.313113845Z" level=info msg="StopPodSandbox for \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\"" Feb 13 15:58:03.313561 containerd[1756]: time="2025-02-13T15:58:03.313524646Z" level=info msg="Ensure that sandbox 477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67 in task-service has been cleanup successfully" Feb 13 15:58:03.314498 containerd[1756]: time="2025-02-13T15:58:03.314457848Z" level=info msg="TearDown network for sandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\" successfully" Feb 13 15:58:03.315163 containerd[1756]: time="2025-02-13T15:58:03.314906169Z" level=info msg="StopPodSandbox for \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\" returns successfully" Feb 13 15:58:03.317200 containerd[1756]: time="2025-02-13T15:58:03.315835450Z" level=info msg="StopPodSandbox for \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\"" Feb 13 15:58:03.317200 containerd[1756]: time="2025-02-13T15:58:03.315932971Z" level=info msg="TearDown 
network for sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" successfully" Feb 13 15:58:03.317200 containerd[1756]: time="2025-02-13T15:58:03.315942931Z" level=info msg="StopPodSandbox for \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" returns successfully" Feb 13 15:58:03.316466 systemd[1]: run-netns-cni\x2de21ecde0\x2d502e\x2d3045\x2d23b1\x2dd9ff1f2e1b3d.mount: Deactivated successfully. Feb 13 15:58:03.318327 kubelet[3351]: I0213 15:58:03.317897 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2" Feb 13 15:58:03.319703 containerd[1756]: time="2025-02-13T15:58:03.319657098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbpxc,Uid:2d8778e0-23a8-47a6-b01b-5b701fc009d0,Namespace:calico-system,Attempt:2,}" Feb 13 15:58:03.320576 containerd[1756]: time="2025-02-13T15:58:03.320139059Z" level=info msg="StopPodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\"" Feb 13 15:58:03.320576 containerd[1756]: time="2025-02-13T15:58:03.320419700Z" level=info msg="Ensure that sandbox a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2 in task-service has been cleanup successfully" Feb 13 15:58:03.320838 containerd[1756]: time="2025-02-13T15:58:03.320805260Z" level=info msg="TearDown network for sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" successfully" Feb 13 15:58:03.321048 containerd[1756]: time="2025-02-13T15:58:03.321026541Z" level=info msg="StopPodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" returns successfully" Feb 13 15:58:03.324314 containerd[1756]: time="2025-02-13T15:58:03.321777422Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\"" Feb 13 15:58:03.323846 systemd[1]: 
run-netns-cni\x2d04d57aed\x2d065c\x2d75d9\x2d5da6\x2de6c7f6c28a28.mount: Deactivated successfully. Feb 13 15:58:03.326825 containerd[1756]: time="2025-02-13T15:58:03.326701432Z" level=info msg="TearDown network for sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" successfully" Feb 13 15:58:03.326825 containerd[1756]: time="2025-02-13T15:58:03.326732792Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" returns successfully" Feb 13 15:58:03.327823 containerd[1756]: time="2025-02-13T15:58:03.327500034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:58:03.328737 kubelet[3351]: I0213 15:58:03.328703 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01" Feb 13 15:58:03.329959 containerd[1756]: time="2025-02-13T15:58:03.329492918Z" level=info msg="StopPodSandbox for \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\"" Feb 13 15:58:03.330356 containerd[1756]: time="2025-02-13T15:58:03.330235439Z" level=info msg="Ensure that sandbox 3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01 in task-service has been cleanup successfully" Feb 13 15:58:03.330826 containerd[1756]: time="2025-02-13T15:58:03.330762600Z" level=info msg="TearDown network for sandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\" successfully" Feb 13 15:58:03.330826 containerd[1756]: time="2025-02-13T15:58:03.330785120Z" level=info msg="StopPodSandbox for \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\" returns successfully" Feb 13 15:58:03.332703 containerd[1756]: time="2025-02-13T15:58:03.332199363Z" level=info msg="StopPodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\"" Feb 13 
15:58:03.332703 containerd[1756]: time="2025-02-13T15:58:03.332282763Z" level=info msg="TearDown network for sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" successfully" Feb 13 15:58:03.332703 containerd[1756]: time="2025-02-13T15:58:03.332316563Z" level=info msg="StopPodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" returns successfully" Feb 13 15:58:03.333327 containerd[1756]: time="2025-02-13T15:58:03.333285325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-9dqxt,Uid:4713ae5a-4a4f-4494-8aa7-cdc51f64b486,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:58:03.334363 kubelet[3351]: I0213 15:58:03.333796 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb" Feb 13 15:58:03.335155 containerd[1756]: time="2025-02-13T15:58:03.335098369Z" level=info msg="StopPodSandbox for \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\"" Feb 13 15:58:03.335723 containerd[1756]: time="2025-02-13T15:58:03.335589530Z" level=info msg="Ensure that sandbox 280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb in task-service has been cleanup successfully" Feb 13 15:58:03.336386 containerd[1756]: time="2025-02-13T15:58:03.336073971Z" level=info msg="TearDown network for sandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\" successfully" Feb 13 15:58:03.336386 containerd[1756]: time="2025-02-13T15:58:03.336101331Z" level=info msg="StopPodSandbox for \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\" returns successfully" Feb 13 15:58:03.336655 containerd[1756]: time="2025-02-13T15:58:03.336619572Z" level=info msg="StopPodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\"" Feb 13 15:58:03.336742 containerd[1756]: time="2025-02-13T15:58:03.336722052Z" level=info msg="TearDown network for 
sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" successfully" Feb 13 15:58:03.336742 containerd[1756]: time="2025-02-13T15:58:03.336737532Z" level=info msg="StopPodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" returns successfully" Feb 13 15:58:03.337239 containerd[1756]: time="2025-02-13T15:58:03.337202853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xz2g6,Uid:7dd69579-7ca4-4802-a6f8-37ab66ddcef1,Namespace:kube-system,Attempt:2,}" Feb 13 15:58:03.472691 containerd[1756]: time="2025-02-13T15:58:03.472602202Z" level=error msg="Failed to destroy network for sandbox \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.473790 containerd[1756]: time="2025-02-13T15:58:03.473612244Z" level=error msg="encountered an error cleaning up failed sandbox \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.473790 containerd[1756]: time="2025-02-13T15:58:03.473687684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j7w87,Uid:a37d0509-8180-495d-aac5-1e394b4d33c7,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.474701 kubelet[3351]: E0213 15:58:03.474126 3351 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.474701 kubelet[3351]: E0213 15:58:03.474182 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j7w87" Feb 13 15:58:03.474701 kubelet[3351]: E0213 15:58:03.474203 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j7w87" Feb 13 15:58:03.474813 kubelet[3351]: E0213 15:58:03.474253 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-j7w87_kube-system(a37d0509-8180-495d-aac5-1e394b4d33c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-j7w87_kube-system(a37d0509-8180-495d-aac5-1e394b4d33c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-j7w87" podUID="a37d0509-8180-495d-aac5-1e394b4d33c7" Feb 13 15:58:03.513784 containerd[1756]: time="2025-02-13T15:58:03.513375563Z" level=error msg="Failed to destroy network for sandbox \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.516744 containerd[1756]: time="2025-02-13T15:58:03.516695410Z" level=error msg="encountered an error cleaning up failed sandbox \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.517664 containerd[1756]: time="2025-02-13T15:58:03.517548451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd8d987-7vztn,Uid:17906d2c-fcbc-4df1-8ac2-176024c123e0,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.518036 kubelet[3351]: E0213 15:58:03.517912 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 13 15:58:03.518036 kubelet[3351]: E0213 15:58:03.517976 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" Feb 13 15:58:03.518036 kubelet[3351]: E0213 15:58:03.518010 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" Feb 13 15:58:03.518679 kubelet[3351]: E0213 15:58:03.518267 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-684dd8d987-7vztn_calico-system(17906d2c-fcbc-4df1-8ac2-176024c123e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-684dd8d987-7vztn_calico-system(17906d2c-fcbc-4df1-8ac2-176024c123e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" podUID="17906d2c-fcbc-4df1-8ac2-176024c123e0" Feb 13 15:58:03.613158 containerd[1756]: time="2025-02-13T15:58:03.613091361Z" level=error msg="Failed 
to destroy network for sandbox \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.614347 containerd[1756]: time="2025-02-13T15:58:03.613886283Z" level=error msg="encountered an error cleaning up failed sandbox \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.615599 containerd[1756]: time="2025-02-13T15:58:03.615499126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbpxc,Uid:2d8778e0-23a8-47a6-b01b-5b701fc009d0,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.617164 kubelet[3351]: E0213 15:58:03.616795 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.617164 kubelet[3351]: E0213 15:58:03.616855 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbpxc" Feb 13 15:58:03.617164 kubelet[3351]: E0213 15:58:03.616877 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbpxc" Feb 13 15:58:03.617992 kubelet[3351]: E0213 15:58:03.616939 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qbpxc_calico-system(2d8778e0-23a8-47a6-b01b-5b701fc009d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qbpxc_calico-system(2d8778e0-23a8-47a6-b01b-5b701fc009d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qbpxc" podUID="2d8778e0-23a8-47a6-b01b-5b701fc009d0" Feb 13 15:58:03.634096 containerd[1756]: time="2025-02-13T15:58:03.634035763Z" level=error msg="Failed to destroy network for sandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.635288 containerd[1756]: time="2025-02-13T15:58:03.635016725Z" level=error 
msg="encountered an error cleaning up failed sandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.635288 containerd[1756]: time="2025-02-13T15:58:03.635113525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.635698 kubelet[3351]: E0213 15:58:03.635673 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.635845 kubelet[3351]: E0213 15:58:03.635833 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" Feb 13 15:58:03.635942 kubelet[3351]: E0213 15:58:03.635931 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" Feb 13 15:58:03.636081 kubelet[3351]: E0213 15:58:03.636068 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5885744d75-ph4x5_calico-apiserver(38b25543-f3d8-4325-8684-f120f4c5229a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5885744d75-ph4x5_calico-apiserver(38b25543-f3d8-4325-8684-f120f4c5229a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" podUID="38b25543-f3d8-4325-8684-f120f4c5229a" Feb 13 15:58:03.662941 containerd[1756]: time="2025-02-13T15:58:03.662897140Z" level=error msg="Failed to destroy network for sandbox \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.663498 containerd[1756]: time="2025-02-13T15:58:03.663466262Z" level=error msg="encountered an error cleaning up failed sandbox \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.663631 containerd[1756]: time="2025-02-13T15:58:03.663611982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xz2g6,Uid:7dd69579-7ca4-4802-a6f8-37ab66ddcef1,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.664316 kubelet[3351]: E0213 15:58:03.663911 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.664316 kubelet[3351]: E0213 15:58:03.663964 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xz2g6" Feb 13 15:58:03.664316 kubelet[3351]: E0213 15:58:03.663985 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-xz2g6" Feb 13 15:58:03.664465 kubelet[3351]: E0213 15:58:03.664046 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xz2g6_kube-system(7dd69579-7ca4-4802-a6f8-37ab66ddcef1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xz2g6_kube-system(7dd69579-7ca4-4802-a6f8-37ab66ddcef1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xz2g6" podUID="7dd69579-7ca4-4802-a6f8-37ab66ddcef1" Feb 13 15:58:03.667257 containerd[1756]: time="2025-02-13T15:58:03.667170669Z" level=error msg="Failed to destroy network for sandbox \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.667637 containerd[1756]: time="2025-02-13T15:58:03.667602230Z" level=error msg="encountered an error cleaning up failed sandbox \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.667696 containerd[1756]: time="2025-02-13T15:58:03.667676110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-9dqxt,Uid:4713ae5a-4a4f-4494-8aa7-cdc51f64b486,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.668470 kubelet[3351]: E0213 15:58:03.667926 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:03.668470 kubelet[3351]: E0213 15:58:03.667982 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" Feb 13 15:58:03.668470 kubelet[3351]: E0213 15:58:03.668004 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" Feb 13 15:58:03.668600 kubelet[3351]: E0213 15:58:03.668084 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5885744d75-9dqxt_calico-apiserver(4713ae5a-4a4f-4494-8aa7-cdc51f64b486)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-5885744d75-9dqxt_calico-apiserver(4713ae5a-4a4f-4494-8aa7-cdc51f64b486)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" podUID="4713ae5a-4a4f-4494-8aa7-cdc51f64b486" Feb 13 15:58:03.862062 systemd[1]: run-netns-cni\x2dd9711adc\x2d8ea7\x2db80a\x2dbac2\x2d81e04ce212ec.mount: Deactivated successfully. Feb 13 15:58:03.862166 systemd[1]: run-netns-cni\x2df230eb30\x2d5a2f\x2d4b13\x2d93bc\x2d9a2a37d3139f.mount: Deactivated successfully. Feb 13 15:58:04.339087 kubelet[3351]: I0213 15:58:04.339002 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc" Feb 13 15:58:04.340831 containerd[1756]: time="2025-02-13T15:58:04.340548568Z" level=info msg="StopPodSandbox for \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\"" Feb 13 15:58:04.340831 containerd[1756]: time="2025-02-13T15:58:04.340733648Z" level=info msg="Ensure that sandbox 8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc in task-service has been cleanup successfully" Feb 13 15:58:04.341833 containerd[1756]: time="2025-02-13T15:58:04.341751210Z" level=info msg="TearDown network for sandbox \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\" successfully" Feb 13 15:58:04.341833 containerd[1756]: time="2025-02-13T15:58:04.341775330Z" level=info msg="StopPodSandbox for \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\" returns successfully" Feb 13 15:58:04.342995 containerd[1756]: time="2025-02-13T15:58:04.342649052Z" level=info msg="StopPodSandbox for 
\"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\"" Feb 13 15:58:04.342995 containerd[1756]: time="2025-02-13T15:58:04.342735412Z" level=info msg="TearDown network for sandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\" successfully" Feb 13 15:58:04.342995 containerd[1756]: time="2025-02-13T15:58:04.342746372Z" level=info msg="StopPodSandbox for \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\" returns successfully" Feb 13 15:58:04.343932 kubelet[3351]: I0213 15:58:04.343280 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005" Feb 13 15:58:04.344007 containerd[1756]: time="2025-02-13T15:58:04.343874894Z" level=info msg="StopPodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\"" Feb 13 15:58:04.344007 containerd[1756]: time="2025-02-13T15:58:04.343963175Z" level=info msg="TearDown network for sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" successfully" Feb 13 15:58:04.344007 containerd[1756]: time="2025-02-13T15:58:04.343974495Z" level=info msg="StopPodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" returns successfully" Feb 13 15:58:04.344095 containerd[1756]: time="2025-02-13T15:58:04.343874854Z" level=info msg="StopPodSandbox for \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\"" Feb 13 15:58:04.344167 containerd[1756]: time="2025-02-13T15:58:04.344136175Z" level=info msg="Ensure that sandbox 7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005 in task-service has been cleanup successfully" Feb 13 15:58:04.344318 systemd[1]: run-netns-cni\x2dd6e39542\x2d4238\x2d980d\x2df048\x2d7a648f03583a.mount: Deactivated successfully. 
Feb 13 15:58:04.345430 containerd[1756]: time="2025-02-13T15:58:04.345068457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd8d987-7vztn,Uid:17906d2c-fcbc-4df1-8ac2-176024c123e0,Namespace:calico-system,Attempt:3,}" Feb 13 15:58:04.346598 containerd[1756]: time="2025-02-13T15:58:04.346381859Z" level=info msg="TearDown network for sandbox \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\" successfully" Feb 13 15:58:04.346741 containerd[1756]: time="2025-02-13T15:58:04.346719100Z" level=info msg="StopPodSandbox for \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\" returns successfully" Feb 13 15:58:04.347267 kubelet[3351]: I0213 15:58:04.347240 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2" Feb 13 15:58:04.347991 containerd[1756]: time="2025-02-13T15:58:04.347548102Z" level=info msg="StopPodSandbox for \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\"" Feb 13 15:58:04.347991 containerd[1756]: time="2025-02-13T15:58:04.347659142Z" level=info msg="TearDown network for sandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\" successfully" Feb 13 15:58:04.347991 containerd[1756]: time="2025-02-13T15:58:04.347668382Z" level=info msg="StopPodSandbox for \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\" returns successfully" Feb 13 15:58:04.349222 containerd[1756]: time="2025-02-13T15:58:04.348809864Z" level=info msg="StopPodSandbox for \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\"" Feb 13 15:58:04.349222 containerd[1756]: time="2025-02-13T15:58:04.348903064Z" level=info msg="TearDown network for sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" successfully" Feb 13 15:58:04.349222 containerd[1756]: time="2025-02-13T15:58:04.348912704Z" level=info msg="StopPodSandbox for 
\"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" returns successfully" Feb 13 15:58:04.349222 containerd[1756]: time="2025-02-13T15:58:04.349074825Z" level=info msg="StopPodSandbox for \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\"" Feb 13 15:58:04.349143 systemd[1]: run-netns-cni\x2d6b8260dd\x2dc9de\x2d9160\x2d57af\x2d1b5a45e72dd1.mount: Deactivated successfully. Feb 13 15:58:04.350678 containerd[1756]: time="2025-02-13T15:58:04.350646708Z" level=info msg="Ensure that sandbox 0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2 in task-service has been cleanup successfully" Feb 13 15:58:04.351664 containerd[1756]: time="2025-02-13T15:58:04.351626110Z" level=info msg="TearDown network for sandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\" successfully" Feb 13 15:58:04.353625 containerd[1756]: time="2025-02-13T15:58:04.353482353Z" level=info msg="StopPodSandbox for \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\" returns successfully" Feb 13 15:58:04.353625 containerd[1756]: time="2025-02-13T15:58:04.353471753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbpxc,Uid:2d8778e0-23a8-47a6-b01b-5b701fc009d0,Namespace:calico-system,Attempt:3,}" Feb 13 15:58:04.357961 systemd[1]: run-netns-cni\x2df811845b\x2d1325\x2d4df7\x2d3902\x2da1779366cd7f.mount: Deactivated successfully. 
Feb 13 15:58:04.360962 containerd[1756]: time="2025-02-13T15:58:04.360923608Z" level=info msg="StopPodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\"" Feb 13 15:58:04.362197 containerd[1756]: time="2025-02-13T15:58:04.361591730Z" level=info msg="TearDown network for sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" successfully" Feb 13 15:58:04.362197 containerd[1756]: time="2025-02-13T15:58:04.362186371Z" level=info msg="StopPodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" returns successfully" Feb 13 15:58:04.363734 containerd[1756]: time="2025-02-13T15:58:04.362867052Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\"" Feb 13 15:58:04.363734 containerd[1756]: time="2025-02-13T15:58:04.362980652Z" level=info msg="TearDown network for sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" successfully" Feb 13 15:58:04.363734 containerd[1756]: time="2025-02-13T15:58:04.363041732Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" returns successfully" Feb 13 15:58:04.363886 kubelet[3351]: I0213 15:58:04.363511 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8" Feb 13 15:58:04.364152 containerd[1756]: time="2025-02-13T15:58:04.364117935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:58:04.364872 containerd[1756]: time="2025-02-13T15:58:04.364834736Z" level=info msg="StopPodSandbox for \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\"" Feb 13 15:58:04.365112 containerd[1756]: time="2025-02-13T15:58:04.365044616Z" level=info msg="Ensure that sandbox 
072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8 in task-service has been cleanup successfully" Feb 13 15:58:04.365258 containerd[1756]: time="2025-02-13T15:58:04.365230417Z" level=info msg="TearDown network for sandbox \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\" successfully" Feb 13 15:58:04.365258 containerd[1756]: time="2025-02-13T15:58:04.365252977Z" level=info msg="StopPodSandbox for \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\" returns successfully" Feb 13 15:58:04.366853 containerd[1756]: time="2025-02-13T15:58:04.366825980Z" level=info msg="StopPodSandbox for \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\"" Feb 13 15:58:04.367167 containerd[1756]: time="2025-02-13T15:58:04.367104381Z" level=info msg="TearDown network for sandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\" successfully" Feb 13 15:58:04.367167 containerd[1756]: time="2025-02-13T15:58:04.367120301Z" level=info msg="StopPodSandbox for \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\" returns successfully" Feb 13 15:58:04.367894 containerd[1756]: time="2025-02-13T15:58:04.367663702Z" level=info msg="StopPodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\"" Feb 13 15:58:04.367894 containerd[1756]: time="2025-02-13T15:58:04.367750822Z" level=info msg="TearDown network for sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" successfully" Feb 13 15:58:04.367894 containerd[1756]: time="2025-02-13T15:58:04.367762982Z" level=info msg="StopPodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" returns successfully" Feb 13 15:58:04.368824 containerd[1756]: time="2025-02-13T15:58:04.368599624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-9dqxt,Uid:4713ae5a-4a4f-4494-8aa7-cdc51f64b486,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:58:04.368850 
systemd[1]: run-netns-cni\x2da36b4044\x2d7cab\x2dbabe\x2d9e24\x2dc6c97cae3327.mount: Deactivated successfully. Feb 13 15:58:04.370742 kubelet[3351]: I0213 15:58:04.370502 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6" Feb 13 15:58:04.372436 containerd[1756]: time="2025-02-13T15:58:04.372261151Z" level=info msg="StopPodSandbox for \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\"" Feb 13 15:58:04.373022 containerd[1756]: time="2025-02-13T15:58:04.372716272Z" level=info msg="Ensure that sandbox da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6 in task-service has been cleanup successfully" Feb 13 15:58:04.373727 containerd[1756]: time="2025-02-13T15:58:04.373683354Z" level=info msg="TearDown network for sandbox \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\" successfully" Feb 13 15:58:04.373727 containerd[1756]: time="2025-02-13T15:58:04.373716674Z" level=info msg="StopPodSandbox for \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\" returns successfully" Feb 13 15:58:04.374239 kubelet[3351]: I0213 15:58:04.374182 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2" Feb 13 15:58:04.374894 containerd[1756]: time="2025-02-13T15:58:04.374680676Z" level=info msg="StopPodSandbox for \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\"" Feb 13 15:58:04.374894 containerd[1756]: time="2025-02-13T15:58:04.374845676Z" level=info msg="TearDown network for sandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\" successfully" Feb 13 15:58:04.374894 containerd[1756]: time="2025-02-13T15:58:04.374857476Z" level=info msg="StopPodSandbox for \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\" returns successfully" Feb 13 15:58:04.375249 
containerd[1756]: time="2025-02-13T15:58:04.375195557Z" level=info msg="StopPodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\"" Feb 13 15:58:04.375291 containerd[1756]: time="2025-02-13T15:58:04.375275277Z" level=info msg="TearDown network for sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" successfully" Feb 13 15:58:04.375291 containerd[1756]: time="2025-02-13T15:58:04.375284997Z" level=info msg="StopPodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" returns successfully" Feb 13 15:58:04.376434 containerd[1756]: time="2025-02-13T15:58:04.376055798Z" level=info msg="StopPodSandbox for \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\"" Feb 13 15:58:04.376434 containerd[1756]: time="2025-02-13T15:58:04.376143039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xz2g6,Uid:7dd69579-7ca4-4802-a6f8-37ab66ddcef1,Namespace:kube-system,Attempt:3,}" Feb 13 15:58:04.376796 containerd[1756]: time="2025-02-13T15:58:04.376757440Z" level=info msg="Ensure that sandbox a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2 in task-service has been cleanup successfully" Feb 13 15:58:04.376950 containerd[1756]: time="2025-02-13T15:58:04.376928320Z" level=info msg="TearDown network for sandbox \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\" successfully" Feb 13 15:58:04.376982 containerd[1756]: time="2025-02-13T15:58:04.376947520Z" level=info msg="StopPodSandbox for \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\" returns successfully" Feb 13 15:58:04.377545 containerd[1756]: time="2025-02-13T15:58:04.377321241Z" level=info msg="StopPodSandbox for \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\"" Feb 13 15:58:04.377545 containerd[1756]: time="2025-02-13T15:58:04.377404281Z" level=info msg="TearDown network for sandbox 
\"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\" successfully" Feb 13 15:58:04.377545 containerd[1756]: time="2025-02-13T15:58:04.377414881Z" level=info msg="StopPodSandbox for \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\" returns successfully" Feb 13 15:58:04.378015 containerd[1756]: time="2025-02-13T15:58:04.377891322Z" level=info msg="StopPodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\"" Feb 13 15:58:04.378015 containerd[1756]: time="2025-02-13T15:58:04.377974162Z" level=info msg="TearDown network for sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" successfully" Feb 13 15:58:04.378015 containerd[1756]: time="2025-02-13T15:58:04.377983282Z" level=info msg="StopPodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" returns successfully" Feb 13 15:58:04.379430 containerd[1756]: time="2025-02-13T15:58:04.379056044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j7w87,Uid:a37d0509-8180-495d-aac5-1e394b4d33c7,Namespace:kube-system,Attempt:3,}" Feb 13 15:58:04.860596 systemd[1]: run-netns-cni\x2d1ec034b8\x2dc38b\x2d639c\x2dda7b\x2dbac0077af5af.mount: Deactivated successfully. Feb 13 15:58:04.860704 systemd[1]: run-netns-cni\x2d9f8c92f7\x2de5a1\x2d25ec\x2d037e\x2d7f13f9fe1134.mount: Deactivated successfully. 
Feb 13 15:58:05.296180 containerd[1756]: time="2025-02-13T15:58:05.295933187Z" level=error msg="Failed to destroy network for sandbox \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.297510 containerd[1756]: time="2025-02-13T15:58:05.297447470Z" level=error msg="encountered an error cleaning up failed sandbox \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.297628 containerd[1756]: time="2025-02-13T15:58:05.297536030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.298574 kubelet[3351]: E0213 15:58:05.297846 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.298574 kubelet[3351]: E0213 15:58:05.297905 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" Feb 13 15:58:05.298574 kubelet[3351]: E0213 15:58:05.297925 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" Feb 13 15:58:05.298738 kubelet[3351]: E0213 15:58:05.297993 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5885744d75-ph4x5_calico-apiserver(38b25543-f3d8-4325-8684-f120f4c5229a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5885744d75-ph4x5_calico-apiserver(38b25543-f3d8-4325-8684-f120f4c5229a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" podUID="38b25543-f3d8-4325-8684-f120f4c5229a" Feb 13 15:58:05.380822 containerd[1756]: time="2025-02-13T15:58:05.380767196Z" level=error msg="Failed to destroy network for sandbox \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.383292 containerd[1756]: time="2025-02-13T15:58:05.383237041Z" level=error msg="encountered an error cleaning up failed sandbox \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.383439 containerd[1756]: time="2025-02-13T15:58:05.383367761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd8d987-7vztn,Uid:17906d2c-fcbc-4df1-8ac2-176024c123e0,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.383880 kubelet[3351]: E0213 15:58:05.383583 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.383880 kubelet[3351]: E0213 15:58:05.383640 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" Feb 13 15:58:05.383880 kubelet[3351]: E0213 15:58:05.383662 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" Feb 13 15:58:05.384208 kubelet[3351]: E0213 15:58:05.383720 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-684dd8d987-7vztn_calico-system(17906d2c-fcbc-4df1-8ac2-176024c123e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-684dd8d987-7vztn_calico-system(17906d2c-fcbc-4df1-8ac2-176024c123e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" podUID="17906d2c-fcbc-4df1-8ac2-176024c123e0" Feb 13 15:58:05.392816 kubelet[3351]: I0213 15:58:05.392672 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a" Feb 13 15:58:05.397801 containerd[1756]: time="2025-02-13T15:58:05.397666510Z" level=info msg="StopPodSandbox for \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\"" Feb 13 15:58:05.398046 containerd[1756]: time="2025-02-13T15:58:05.397873630Z" level=info msg="Ensure that sandbox 
57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a in task-service has been cleanup successfully" Feb 13 15:58:05.399415 containerd[1756]: time="2025-02-13T15:58:05.399368073Z" level=info msg="TearDown network for sandbox \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\" successfully" Feb 13 15:58:05.399415 containerd[1756]: time="2025-02-13T15:58:05.399405673Z" level=info msg="StopPodSandbox for \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\" returns successfully" Feb 13 15:58:05.400586 containerd[1756]: time="2025-02-13T15:58:05.400505635Z" level=info msg="StopPodSandbox for \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\"" Feb 13 15:58:05.400989 containerd[1756]: time="2025-02-13T15:58:05.400952516Z" level=error msg="Failed to destroy network for sandbox \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.401391 containerd[1756]: time="2025-02-13T15:58:05.401365757Z" level=info msg="TearDown network for sandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\" successfully" Feb 13 15:58:05.401521 containerd[1756]: time="2025-02-13T15:58:05.401507157Z" level=info msg="StopPodSandbox for \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\" returns successfully" Feb 13 15:58:05.403708 containerd[1756]: time="2025-02-13T15:58:05.403509441Z" level=info msg="StopPodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\"" Feb 13 15:58:05.404072 containerd[1756]: time="2025-02-13T15:58:05.404042722Z" level=info msg="TearDown network for sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" successfully" Feb 13 15:58:05.404449 containerd[1756]: time="2025-02-13T15:58:05.404226683Z" level=info 
msg="StopPodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" returns successfully" Feb 13 15:58:05.404449 containerd[1756]: time="2025-02-13T15:58:05.404584243Z" level=error msg="encountered an error cleaning up failed sandbox \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.404731 containerd[1756]: time="2025-02-13T15:58:05.404677484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j7w87,Uid:a37d0509-8180-495d-aac5-1e394b4d33c7,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.405006 kubelet[3351]: E0213 15:58:05.404978 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.405056 kubelet[3351]: E0213 15:58:05.405028 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-76f75df574-j7w87" Feb 13 15:58:05.405056 kubelet[3351]: E0213 15:58:05.405053 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j7w87" Feb 13 15:58:05.405136 kubelet[3351]: E0213 15:58:05.405101 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-j7w87_kube-system(a37d0509-8180-495d-aac5-1e394b4d33c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-j7w87_kube-system(a37d0509-8180-495d-aac5-1e394b4d33c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-j7w87" podUID="a37d0509-8180-495d-aac5-1e394b4d33c7" Feb 13 15:58:05.405505 containerd[1756]: time="2025-02-13T15:58:05.405351405Z" level=error msg="Failed to destroy network for sandbox \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.406703 containerd[1756]: time="2025-02-13T15:58:05.406578407Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\"" Feb 13 15:58:05.407776 containerd[1756]: 
time="2025-02-13T15:58:05.406789008Z" level=info msg="TearDown network for sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" successfully" Feb 13 15:58:05.407776 containerd[1756]: time="2025-02-13T15:58:05.406807288Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" returns successfully" Feb 13 15:58:05.407776 containerd[1756]: time="2025-02-13T15:58:05.407042928Z" level=error msg="encountered an error cleaning up failed sandbox \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.407776 containerd[1756]: time="2025-02-13T15:58:05.407106408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbpxc,Uid:2d8778e0-23a8-47a6-b01b-5b701fc009d0,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.407959 kubelet[3351]: E0213 15:58:05.407804 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.407959 kubelet[3351]: E0213 15:58:05.407851 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbpxc" Feb 13 15:58:05.407959 kubelet[3351]: E0213 15:58:05.407875 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbpxc" Feb 13 15:58:05.408433 kubelet[3351]: E0213 15:58:05.408290 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qbpxc_calico-system(2d8778e0-23a8-47a6-b01b-5b701fc009d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qbpxc_calico-system(2d8778e0-23a8-47a6-b01b-5b701fc009d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qbpxc" podUID="2d8778e0-23a8-47a6-b01b-5b701fc009d0" Feb 13 15:58:05.410681 containerd[1756]: time="2025-02-13T15:58:05.410599735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:58:05.415805 containerd[1756]: time="2025-02-13T15:58:05.415756746Z" level=error msg="Failed to destroy network for sandbox 
\"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.417315 containerd[1756]: time="2025-02-13T15:58:05.417131588Z" level=error msg="encountered an error cleaning up failed sandbox \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.417315 containerd[1756]: time="2025-02-13T15:58:05.417209148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xz2g6,Uid:7dd69579-7ca4-4802-a6f8-37ab66ddcef1,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.417591 kubelet[3351]: E0213 15:58:05.417439 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.417591 kubelet[3351]: E0213 15:58:05.417491 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xz2g6" Feb 13 15:58:05.418794 kubelet[3351]: E0213 15:58:05.418687 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xz2g6" Feb 13 15:58:05.418794 kubelet[3351]: E0213 15:58:05.418786 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xz2g6_kube-system(7dd69579-7ca4-4802-a6f8-37ab66ddcef1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xz2g6_kube-system(7dd69579-7ca4-4802-a6f8-37ab66ddcef1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xz2g6" podUID="7dd69579-7ca4-4802-a6f8-37ab66ddcef1" Feb 13 15:58:05.428162 containerd[1756]: time="2025-02-13T15:58:05.428105370Z" level=error msg="Failed to destroy network for sandbox \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.428604 containerd[1756]: time="2025-02-13T15:58:05.428485251Z" level=error msg="encountered an error cleaning up 
failed sandbox \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.428604 containerd[1756]: time="2025-02-13T15:58:05.428555171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-9dqxt,Uid:4713ae5a-4a4f-4494-8aa7-cdc51f64b486,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.428948 kubelet[3351]: E0213 15:58:05.428799 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.428948 kubelet[3351]: E0213 15:58:05.428852 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" Feb 13 15:58:05.428948 kubelet[3351]: E0213 15:58:05.428875 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" Feb 13 15:58:05.429038 kubelet[3351]: E0213 15:58:05.428929 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5885744d75-9dqxt_calico-apiserver(4713ae5a-4a4f-4494-8aa7-cdc51f64b486)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5885744d75-9dqxt_calico-apiserver(4713ae5a-4a4f-4494-8aa7-cdc51f64b486)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" podUID="4713ae5a-4a4f-4494-8aa7-cdc51f64b486" Feb 13 15:58:05.528817 containerd[1756]: time="2025-02-13T15:58:05.528553250Z" level=error msg="Failed to destroy network for sandbox \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.530096 containerd[1756]: time="2025-02-13T15:58:05.529949413Z" level=error msg="encountered an error cleaning up failed sandbox \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:58:05.530499 containerd[1756]: time="2025-02-13T15:58:05.530285133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.531388 kubelet[3351]: E0213 15:58:05.531345 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:05.531476 kubelet[3351]: E0213 15:58:05.531414 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" Feb 13 15:58:05.531476 kubelet[3351]: E0213 15:58:05.531438 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" Feb 13 15:58:05.533071 kubelet[3351]: E0213 15:58:05.532092 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5885744d75-ph4x5_calico-apiserver(38b25543-f3d8-4325-8684-f120f4c5229a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5885744d75-ph4x5_calico-apiserver(38b25543-f3d8-4325-8684-f120f4c5229a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" podUID="38b25543-f3d8-4325-8684-f120f4c5229a" Feb 13 15:58:05.864398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221-shm.mount: Deactivated successfully. Feb 13 15:58:05.864907 systemd[1]: run-netns-cni\x2d4fc3d8af\x2debe3\x2d22c8\x2d4044\x2d5251a888cdd4.mount: Deactivated successfully. Feb 13 15:58:05.864960 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a-shm.mount: Deactivated successfully. 
Feb 13 15:58:06.399009 kubelet[3351]: I0213 15:58:06.398967 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf" Feb 13 15:58:06.399888 containerd[1756]: time="2025-02-13T15:58:06.399838753Z" level=info msg="StopPodSandbox for \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\"" Feb 13 15:58:06.403474 containerd[1756]: time="2025-02-13T15:58:06.400032273Z" level=info msg="Ensure that sandbox c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf in task-service has been cleanup successfully" Feb 13 15:58:06.403734 containerd[1756]: time="2025-02-13T15:58:06.403696761Z" level=info msg="TearDown network for sandbox \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\" successfully" Feb 13 15:58:06.403734 containerd[1756]: time="2025-02-13T15:58:06.403730281Z" level=info msg="StopPodSandbox for \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\" returns successfully" Feb 13 15:58:06.405560 containerd[1756]: time="2025-02-13T15:58:06.405441364Z" level=info msg="StopPodSandbox for \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\"" Feb 13 15:58:06.405656 systemd[1]: run-netns-cni\x2d7867ee18\x2d2434\x2d4878\x2d6cc1\x2db8c43f06eec0.mount: Deactivated successfully. 
Feb 13 15:58:06.406610 containerd[1756]: time="2025-02-13T15:58:06.406483646Z" level=info msg="TearDown network for sandbox \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\" successfully" Feb 13 15:58:06.406610 containerd[1756]: time="2025-02-13T15:58:06.406512126Z" level=info msg="StopPodSandbox for \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\" returns successfully" Feb 13 15:58:06.408729 containerd[1756]: time="2025-02-13T15:58:06.408699011Z" level=info msg="StopPodSandbox for \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\"" Feb 13 15:58:06.409366 containerd[1756]: time="2025-02-13T15:58:06.408954291Z" level=info msg="TearDown network for sandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\" successfully" Feb 13 15:58:06.409366 containerd[1756]: time="2025-02-13T15:58:06.408972211Z" level=info msg="StopPodSandbox for \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\" returns successfully" Feb 13 15:58:06.409366 containerd[1756]: time="2025-02-13T15:58:06.409221252Z" level=info msg="StopPodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\"" Feb 13 15:58:06.409366 containerd[1756]: time="2025-02-13T15:58:06.409287572Z" level=info msg="TearDown network for sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" successfully" Feb 13 15:58:06.409366 containerd[1756]: time="2025-02-13T15:58:06.409319372Z" level=info msg="StopPodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" returns successfully" Feb 13 15:58:06.410374 containerd[1756]: time="2025-02-13T15:58:06.410262454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xz2g6,Uid:7dd69579-7ca4-4802-a6f8-37ab66ddcef1,Namespace:kube-system,Attempt:4,}" Feb 13 15:58:06.411196 kubelet[3351]: I0213 15:58:06.411159 3351 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6" Feb 13 15:58:06.413372 containerd[1756]: time="2025-02-13T15:58:06.412573419Z" level=info msg="StopPodSandbox for \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\"" Feb 13 15:58:06.413890 containerd[1756]: time="2025-02-13T15:58:06.413844221Z" level=info msg="Ensure that sandbox 788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6 in task-service has been cleanup successfully" Feb 13 15:58:06.417788 systemd[1]: run-netns-cni\x2d2ed5b729\x2d9b11\x2d31b5\x2dd2e5\x2d22dfe13763c3.mount: Deactivated successfully. Feb 13 15:58:06.419279 containerd[1756]: time="2025-02-13T15:58:06.418866711Z" level=info msg="TearDown network for sandbox \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\" successfully" Feb 13 15:58:06.419279 containerd[1756]: time="2025-02-13T15:58:06.418904432Z" level=info msg="StopPodSandbox for \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\" returns successfully" Feb 13 15:58:06.421053 containerd[1756]: time="2025-02-13T15:58:06.421022156Z" level=info msg="StopPodSandbox for \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\"" Feb 13 15:58:06.421755 containerd[1756]: time="2025-02-13T15:58:06.421735877Z" level=info msg="TearDown network for sandbox \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\" successfully" Feb 13 15:58:06.421974 containerd[1756]: time="2025-02-13T15:58:06.421854837Z" level=info msg="StopPodSandbox for \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\" returns successfully" Feb 13 15:58:06.422745 containerd[1756]: time="2025-02-13T15:58:06.422722879Z" level=info msg="StopPodSandbox for \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\"" Feb 13 15:58:06.422928 containerd[1756]: time="2025-02-13T15:58:06.422911840Z" level=info msg="TearDown network for sandbox 
\"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\" successfully" Feb 13 15:58:06.423401 containerd[1756]: time="2025-02-13T15:58:06.423232880Z" level=info msg="StopPodSandbox for \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\" returns successfully" Feb 13 15:58:06.423927 containerd[1756]: time="2025-02-13T15:58:06.423905602Z" level=info msg="StopPodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\"" Feb 13 15:58:06.424149 containerd[1756]: time="2025-02-13T15:58:06.424130722Z" level=info msg="TearDown network for sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" successfully" Feb 13 15:58:06.424392 containerd[1756]: time="2025-02-13T15:58:06.424373923Z" level=info msg="StopPodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" returns successfully" Feb 13 15:58:06.425637 kubelet[3351]: I0213 15:58:06.425606 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221" Feb 13 15:58:06.426704 containerd[1756]: time="2025-02-13T15:58:06.426667487Z" level=info msg="StopPodSandbox for \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\"" Feb 13 15:58:06.426902 containerd[1756]: time="2025-02-13T15:58:06.426869688Z" level=info msg="Ensure that sandbox ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221 in task-service has been cleanup successfully" Feb 13 15:58:06.429997 containerd[1756]: time="2025-02-13T15:58:06.427229968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j7w87,Uid:a37d0509-8180-495d-aac5-1e394b4d33c7,Namespace:kube-system,Attempt:4,}" Feb 13 15:58:06.431024 containerd[1756]: time="2025-02-13T15:58:06.430992856Z" level=info msg="TearDown network for sandbox \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\" successfully" Feb 13 15:58:06.436037 containerd[1756]: 
time="2025-02-13T15:58:06.431346737Z" level=info msg="StopPodSandbox for \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\" returns successfully" Feb 13 15:58:06.436037 containerd[1756]: time="2025-02-13T15:58:06.432991580Z" level=info msg="StopPodSandbox for \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\"" Feb 13 15:58:06.436037 containerd[1756]: time="2025-02-13T15:58:06.433092460Z" level=info msg="TearDown network for sandbox \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\" successfully" Feb 13 15:58:06.436037 containerd[1756]: time="2025-02-13T15:58:06.433101740Z" level=info msg="StopPodSandbox for \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\" returns successfully" Feb 13 15:58:06.436037 containerd[1756]: time="2025-02-13T15:58:06.434482943Z" level=info msg="StopPodSandbox for \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\"" Feb 13 15:58:06.436037 containerd[1756]: time="2025-02-13T15:58:06.434593663Z" level=info msg="TearDown network for sandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\" successfully" Feb 13 15:58:06.436037 containerd[1756]: time="2025-02-13T15:58:06.434603463Z" level=info msg="StopPodSandbox for \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\" returns successfully" Feb 13 15:58:06.436037 containerd[1756]: time="2025-02-13T15:58:06.435508665Z" level=info msg="StopPodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\"" Feb 13 15:58:06.432880 systemd[1]: run-netns-cni\x2d64135d2b\x2dcf15\x2d0d71\x2d689a\x2dae4c3ac1c76b.mount: Deactivated successfully. 
Feb 13 15:58:06.436349 containerd[1756]: time="2025-02-13T15:58:06.436185946Z" level=info msg="TearDown network for sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" successfully" Feb 13 15:58:06.436349 containerd[1756]: time="2025-02-13T15:58:06.436221747Z" level=info msg="StopPodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" returns successfully" Feb 13 15:58:06.438353 containerd[1756]: time="2025-02-13T15:58:06.437909550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd8d987-7vztn,Uid:17906d2c-fcbc-4df1-8ac2-176024c123e0,Namespace:calico-system,Attempt:4,}" Feb 13 15:58:06.440004 kubelet[3351]: I0213 15:58:06.439384 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8" Feb 13 15:58:06.440733 containerd[1756]: time="2025-02-13T15:58:06.440685156Z" level=info msg="StopPodSandbox for \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\"" Feb 13 15:58:06.442488 containerd[1756]: time="2025-02-13T15:58:06.441187597Z" level=info msg="Ensure that sandbox 4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8 in task-service has been cleanup successfully" Feb 13 15:58:06.446479 containerd[1756]: time="2025-02-13T15:58:06.444678764Z" level=info msg="TearDown network for sandbox \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\" successfully" Feb 13 15:58:06.447980 systemd[1]: run-netns-cni\x2d1d785ad1\x2d7efd\x2db16d\x2d5483\x2de57aca597061.mount: Deactivated successfully. 
Feb 13 15:58:06.448464 containerd[1756]: time="2025-02-13T15:58:06.448350171Z" level=info msg="StopPodSandbox for \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\" returns successfully" Feb 13 15:58:06.451184 containerd[1756]: time="2025-02-13T15:58:06.450807296Z" level=info msg="StopPodSandbox for \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\"" Feb 13 15:58:06.453537 containerd[1756]: time="2025-02-13T15:58:06.451407257Z" level=info msg="TearDown network for sandbox \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\" successfully" Feb 13 15:58:06.453537 containerd[1756]: time="2025-02-13T15:58:06.451426817Z" level=info msg="StopPodSandbox for \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\" returns successfully" Feb 13 15:58:06.453537 containerd[1756]: time="2025-02-13T15:58:06.452085419Z" level=info msg="StopPodSandbox for \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\"" Feb 13 15:58:06.453537 containerd[1756]: time="2025-02-13T15:58:06.452185499Z" level=info msg="TearDown network for sandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\" successfully" Feb 13 15:58:06.453537 containerd[1756]: time="2025-02-13T15:58:06.452196339Z" level=info msg="StopPodSandbox for \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\" returns successfully" Feb 13 15:58:06.453537 containerd[1756]: time="2025-02-13T15:58:06.452969740Z" level=info msg="StopPodSandbox for \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\"" Feb 13 15:58:06.453537 containerd[1756]: time="2025-02-13T15:58:06.453078381Z" level=info msg="StopPodSandbox for \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\"" Feb 13 15:58:06.453537 containerd[1756]: time="2025-02-13T15:58:06.453250581Z" level=info msg="TearDown network for sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" successfully" Feb 13 15:58:06.453537 
containerd[1756]: time="2025-02-13T15:58:06.453457301Z" level=info msg="StopPodSandbox for \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" returns successfully" Feb 13 15:58:06.453768 kubelet[3351]: I0213 15:58:06.451824 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72" Feb 13 15:58:06.453806 containerd[1756]: time="2025-02-13T15:58:06.453573942Z" level=info msg="Ensure that sandbox f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72 in task-service has been cleanup successfully" Feb 13 15:58:06.456032 containerd[1756]: time="2025-02-13T15:58:06.455989467Z" level=info msg="TearDown network for sandbox \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\" successfully" Feb 13 15:58:06.456032 containerd[1756]: time="2025-02-13T15:58:06.456019947Z" level=info msg="StopPodSandbox for \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\" returns successfully" Feb 13 15:58:06.456323 containerd[1756]: time="2025-02-13T15:58:06.456283107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbpxc,Uid:2d8778e0-23a8-47a6-b01b-5b701fc009d0,Namespace:calico-system,Attempt:4,}" Feb 13 15:58:06.457960 containerd[1756]: time="2025-02-13T15:58:06.457245869Z" level=info msg="StopPodSandbox for \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\"" Feb 13 15:58:06.458077 containerd[1756]: time="2025-02-13T15:58:06.457960471Z" level=info msg="TearDown network for sandbox \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\" successfully" Feb 13 15:58:06.458077 containerd[1756]: time="2025-02-13T15:58:06.458003391Z" level=info msg="StopPodSandbox for \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\" returns successfully" Feb 13 15:58:06.459527 kubelet[3351]: I0213 15:58:06.459386 3351 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025" Feb 13 15:58:06.459800 containerd[1756]: time="2025-02-13T15:58:06.458813112Z" level=info msg="StopPodSandbox for \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\"" Feb 13 15:58:06.463037 containerd[1756]: time="2025-02-13T15:58:06.462248639Z" level=info msg="StopPodSandbox for \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\"" Feb 13 15:58:06.463037 containerd[1756]: time="2025-02-13T15:58:06.462887161Z" level=info msg="Ensure that sandbox 028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025 in task-service has been cleanup successfully" Feb 13 15:58:06.464090 containerd[1756]: time="2025-02-13T15:58:06.463138521Z" level=info msg="TearDown network for sandbox \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\" successfully" Feb 13 15:58:06.464090 containerd[1756]: time="2025-02-13T15:58:06.463196761Z" level=info msg="StopPodSandbox for \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\" returns successfully" Feb 13 15:58:06.464090 containerd[1756]: time="2025-02-13T15:58:06.463241481Z" level=info msg="TearDown network for sandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\" successfully" Feb 13 15:58:06.464090 containerd[1756]: time="2025-02-13T15:58:06.463259161Z" level=info msg="StopPodSandbox for \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\" returns successfully" Feb 13 15:58:06.466223 containerd[1756]: time="2025-02-13T15:58:06.465897967Z" level=info msg="StopPodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\"" Feb 13 15:58:06.466223 containerd[1756]: time="2025-02-13T15:58:06.466030127Z" level=info msg="TearDown network for sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" successfully" Feb 13 15:58:06.466223 containerd[1756]: time="2025-02-13T15:58:06.466040927Z" level=info 
msg="StopPodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" returns successfully" Feb 13 15:58:06.466223 containerd[1756]: time="2025-02-13T15:58:06.466121967Z" level=info msg="StopPodSandbox for \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\"" Feb 13 15:58:06.466223 containerd[1756]: time="2025-02-13T15:58:06.466176367Z" level=info msg="TearDown network for sandbox \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\" successfully" Feb 13 15:58:06.466223 containerd[1756]: time="2025-02-13T15:58:06.466184847Z" level=info msg="StopPodSandbox for \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\" returns successfully" Feb 13 15:58:06.467724 containerd[1756]: time="2025-02-13T15:58:06.467664090Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\"" Feb 13 15:58:06.467842 containerd[1756]: time="2025-02-13T15:58:06.467770210Z" level=info msg="TearDown network for sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" successfully" Feb 13 15:58:06.467842 containerd[1756]: time="2025-02-13T15:58:06.467785050Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" returns successfully" Feb 13 15:58:06.468452 containerd[1756]: time="2025-02-13T15:58:06.467889251Z" level=info msg="StopPodSandbox for \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\"" Feb 13 15:58:06.468452 containerd[1756]: time="2025-02-13T15:58:06.467955451Z" level=info msg="TearDown network for sandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\" successfully" Feb 13 15:58:06.468452 containerd[1756]: time="2025-02-13T15:58:06.467965251Z" level=info msg="StopPodSandbox for \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\" returns successfully" Feb 13 15:58:06.469022 containerd[1756]: time="2025-02-13T15:58:06.468996373Z" level=info 
msg="StopPodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\"" Feb 13 15:58:06.469341 containerd[1756]: time="2025-02-13T15:58:06.469322734Z" level=info msg="TearDown network for sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" successfully" Feb 13 15:58:06.469439 containerd[1756]: time="2025-02-13T15:58:06.469423694Z" level=info msg="StopPodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" returns successfully" Feb 13 15:58:06.470410 containerd[1756]: time="2025-02-13T15:58:06.470379096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:58:06.470638 containerd[1756]: time="2025-02-13T15:58:06.470612496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-9dqxt,Uid:4713ae5a-4a4f-4494-8aa7-cdc51f64b486,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:58:06.589251 containerd[1756]: time="2025-02-13T15:58:06.589179256Z" level=error msg="Failed to destroy network for sandbox \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.590289 containerd[1756]: time="2025-02-13T15:58:06.590240458Z" level=error msg="encountered an error cleaning up failed sandbox \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.590460 containerd[1756]: time="2025-02-13T15:58:06.590344018Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-xz2g6,Uid:7dd69579-7ca4-4802-a6f8-37ab66ddcef1,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.590608 kubelet[3351]: E0213 15:58:06.590580 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.590672 kubelet[3351]: E0213 15:58:06.590638 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xz2g6" Feb 13 15:58:06.590672 kubelet[3351]: E0213 15:58:06.590658 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xz2g6" Feb 13 15:58:06.590743 kubelet[3351]: E0213 15:58:06.590722 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-76f75df574-xz2g6_kube-system(7dd69579-7ca4-4802-a6f8-37ab66ddcef1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xz2g6_kube-system(7dd69579-7ca4-4802-a6f8-37ab66ddcef1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xz2g6" podUID="7dd69579-7ca4-4802-a6f8-37ab66ddcef1" Feb 13 15:58:06.785887 containerd[1756]: time="2025-02-13T15:58:06.785684814Z" level=error msg="Failed to destroy network for sandbox \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.786504 containerd[1756]: time="2025-02-13T15:58:06.786431255Z" level=error msg="encountered an error cleaning up failed sandbox \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.787228 containerd[1756]: time="2025-02-13T15:58:06.786511975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j7w87,Uid:a37d0509-8180-495d-aac5-1e394b4d33c7,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Feb 13 15:58:06.787292 kubelet[3351]: E0213 15:58:06.786777 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.787292 kubelet[3351]: E0213 15:58:06.786832 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j7w87" Feb 13 15:58:06.787292 kubelet[3351]: E0213 15:58:06.786861 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j7w87" Feb 13 15:58:06.787419 kubelet[3351]: E0213 15:58:06.786997 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-j7w87_kube-system(a37d0509-8180-495d-aac5-1e394b4d33c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-j7w87_kube-system(a37d0509-8180-495d-aac5-1e394b4d33c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-j7w87" podUID="a37d0509-8180-495d-aac5-1e394b4d33c7" Feb 13 15:58:06.803159 containerd[1756]: time="2025-02-13T15:58:06.802714848Z" level=error msg="Failed to destroy network for sandbox \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.803792 containerd[1756]: time="2025-02-13T15:58:06.803639810Z" level=error msg="encountered an error cleaning up failed sandbox \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.803996 containerd[1756]: time="2025-02-13T15:58:06.803971051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-9dqxt,Uid:4713ae5a-4a4f-4494-8aa7-cdc51f64b486,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.804782 kubelet[3351]: E0213 15:58:06.804456 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.804782 kubelet[3351]: E0213 15:58:06.804527 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" Feb 13 15:58:06.804782 kubelet[3351]: E0213 15:58:06.804548 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" Feb 13 15:58:06.805005 kubelet[3351]: E0213 15:58:06.804618 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5885744d75-9dqxt_calico-apiserver(4713ae5a-4a4f-4494-8aa7-cdc51f64b486)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5885744d75-9dqxt_calico-apiserver(4713ae5a-4a4f-4494-8aa7-cdc51f64b486)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" podUID="4713ae5a-4a4f-4494-8aa7-cdc51f64b486" Feb 13 15:58:06.806363 containerd[1756]: time="2025-02-13T15:58:06.806196615Z" level=error msg="Failed to destroy network for sandbox \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.807594 containerd[1756]: time="2025-02-13T15:58:06.807366457Z" level=error msg="encountered an error cleaning up failed sandbox \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.807805 containerd[1756]: time="2025-02-13T15:58:06.807754178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd8d987-7vztn,Uid:17906d2c-fcbc-4df1-8ac2-176024c123e0,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.808421 kubelet[3351]: E0213 15:58:06.808307 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.808779 kubelet[3351]: E0213 
15:58:06.808630 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" Feb 13 15:58:06.808779 kubelet[3351]: E0213 15:58:06.808663 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" Feb 13 15:58:06.808779 kubelet[3351]: E0213 15:58:06.808744 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-684dd8d987-7vztn_calico-system(17906d2c-fcbc-4df1-8ac2-176024c123e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-684dd8d987-7vztn_calico-system(17906d2c-fcbc-4df1-8ac2-176024c123e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" podUID="17906d2c-fcbc-4df1-8ac2-176024c123e0" Feb 13 15:58:06.811088 containerd[1756]: time="2025-02-13T15:58:06.811036025Z" level=error msg="Failed to destroy network for sandbox 
\"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.811760 containerd[1756]: time="2025-02-13T15:58:06.811590986Z" level=error msg="encountered an error cleaning up failed sandbox \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.811760 containerd[1756]: time="2025-02-13T15:58:06.811663626Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.812133 kubelet[3351]: E0213 15:58:06.812006 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.812198 kubelet[3351]: E0213 15:58:06.812161 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" Feb 13 15:58:06.812198 kubelet[3351]: E0213 15:58:06.812183 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" Feb 13 15:58:06.812349 kubelet[3351]: E0213 15:58:06.812241 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5885744d75-ph4x5_calico-apiserver(38b25543-f3d8-4325-8684-f120f4c5229a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5885744d75-ph4x5_calico-apiserver(38b25543-f3d8-4325-8684-f120f4c5229a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" podUID="38b25543-f3d8-4325-8684-f120f4c5229a" Feb 13 15:58:06.823158 containerd[1756]: time="2025-02-13T15:58:06.823102409Z" level=error msg="Failed to destroy network for sandbox \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.823976 
containerd[1756]: time="2025-02-13T15:58:06.823810291Z" level=error msg="encountered an error cleaning up failed sandbox \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.823976 containerd[1756]: time="2025-02-13T15:58:06.823914691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbpxc,Uid:2d8778e0-23a8-47a6-b01b-5b701fc009d0,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.824184 kubelet[3351]: E0213 15:58:06.824145 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:58:06.824438 kubelet[3351]: E0213 15:58:06.824200 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbpxc" Feb 13 15:58:06.824438 kubelet[3351]: E0213 15:58:06.824225 3351 kuberuntime_manager.go:1172] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbpxc" Feb 13 15:58:06.824438 kubelet[3351]: E0213 15:58:06.824370 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qbpxc_calico-system(2d8778e0-23a8-47a6-b01b-5b701fc009d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qbpxc_calico-system(2d8778e0-23a8-47a6-b01b-5b701fc009d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qbpxc" podUID="2d8778e0-23a8-47a6-b01b-5b701fc009d0" Feb 13 15:58:06.863235 systemd[1]: run-netns-cni\x2d27ff11df\x2de8d4\x2dca09\x2de20f\x2d6e08725e2258.mount: Deactivated successfully. Feb 13 15:58:06.863832 systemd[1]: run-netns-cni\x2ddec9812c\x2d8ee6\x2d6ff8\x2dfcb1\x2dd77f3df8efdb.mount: Deactivated successfully. 
Feb 13 15:58:07.465991 kubelet[3351]: I0213 15:58:07.465945 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb" Feb 13 15:58:07.468323 containerd[1756]: time="2025-02-13T15:58:07.467979314Z" level=info msg="StopPodSandbox for \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\"" Feb 13 15:58:07.468323 containerd[1756]: time="2025-02-13T15:58:07.468166994Z" level=info msg="Ensure that sandbox 2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb in task-service has been cleanup successfully" Feb 13 15:58:07.470790 containerd[1756]: time="2025-02-13T15:58:07.470736280Z" level=info msg="TearDown network for sandbox \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\" successfully" Feb 13 15:58:07.470790 containerd[1756]: time="2025-02-13T15:58:07.470778240Z" level=info msg="StopPodSandbox for \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\" returns successfully" Feb 13 15:58:07.472739 containerd[1756]: time="2025-02-13T15:58:07.472289243Z" level=info msg="StopPodSandbox for \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\"" Feb 13 15:58:07.474051 containerd[1756]: time="2025-02-13T15:58:07.473592085Z" level=info msg="TearDown network for sandbox \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\" successfully" Feb 13 15:58:07.474051 containerd[1756]: time="2025-02-13T15:58:07.473621246Z" level=info msg="StopPodSandbox for \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\" returns successfully" Feb 13 15:58:07.473942 systemd[1]: run-netns-cni\x2d095fbb5c\x2d7adb\x2da91b\x2d672b\x2dc35e19438f1f.mount: Deactivated successfully. 
Feb 13 15:58:07.477710 containerd[1756]: time="2025-02-13T15:58:07.477656054Z" level=info msg="StopPodSandbox for \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\"" Feb 13 15:58:07.477796 containerd[1756]: time="2025-02-13T15:58:07.477766134Z" level=info msg="TearDown network for sandbox \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\" successfully" Feb 13 15:58:07.477796 containerd[1756]: time="2025-02-13T15:58:07.477776894Z" level=info msg="StopPodSandbox for \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\" returns successfully" Feb 13 15:58:07.478756 containerd[1756]: time="2025-02-13T15:58:07.478717616Z" level=info msg="StopPodSandbox for \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\"" Feb 13 15:58:07.478857 containerd[1756]: time="2025-02-13T15:58:07.478805456Z" level=info msg="TearDown network for sandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\" successfully" Feb 13 15:58:07.478857 containerd[1756]: time="2025-02-13T15:58:07.478815656Z" level=info msg="StopPodSandbox for \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\" returns successfully" Feb 13 15:58:07.479753 containerd[1756]: time="2025-02-13T15:58:07.479702538Z" level=info msg="StopPodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\"" Feb 13 15:58:07.479838 containerd[1756]: time="2025-02-13T15:58:07.479825058Z" level=info msg="TearDown network for sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" successfully" Feb 13 15:58:07.479866 containerd[1756]: time="2025-02-13T15:58:07.479837338Z" level=info msg="StopPodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" returns successfully" Feb 13 15:58:07.480279 kubelet[3351]: I0213 15:58:07.480243 3351 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db" Feb 13 15:58:07.481846 containerd[1756]: time="2025-02-13T15:58:07.481812622Z" level=info msg="StopPodSandbox for \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\"" Feb 13 15:58:07.482000 containerd[1756]: time="2025-02-13T15:58:07.481978422Z" level=info msg="Ensure that sandbox f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db in task-service has been cleanup successfully" Feb 13 15:58:07.483424 containerd[1756]: time="2025-02-13T15:58:07.483387265Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\"" Feb 13 15:58:07.484005 containerd[1756]: time="2025-02-13T15:58:07.483784666Z" level=info msg="TearDown network for sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" successfully" Feb 13 15:58:07.484005 containerd[1756]: time="2025-02-13T15:58:07.483802666Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" returns successfully" Feb 13 15:58:07.485331 containerd[1756]: time="2025-02-13T15:58:07.485040869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:6,}" Feb 13 15:58:07.487457 systemd[1]: run-netns-cni\x2df1f16e58\x2d0578\x2d17b4\x2d3835\x2d8b2c1ccfc9ae.mount: Deactivated successfully. 
Feb 13 15:58:07.488209 containerd[1756]: time="2025-02-13T15:58:07.488102115Z" level=info msg="TearDown network for sandbox \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\" successfully" Feb 13 15:58:07.488209 containerd[1756]: time="2025-02-13T15:58:07.488145475Z" level=info msg="StopPodSandbox for \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\" returns successfully" Feb 13 15:58:07.489199 kubelet[3351]: I0213 15:58:07.489102 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8" Feb 13 15:58:07.491315 containerd[1756]: time="2025-02-13T15:58:07.490720760Z" level=info msg="StopPodSandbox for \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\"" Feb 13 15:58:07.492818 containerd[1756]: time="2025-02-13T15:58:07.491933403Z" level=info msg="TearDown network for sandbox \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\" successfully" Feb 13 15:58:07.492818 containerd[1756]: time="2025-02-13T15:58:07.491971643Z" level=info msg="StopPodSandbox for \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\" returns successfully" Feb 13 15:58:07.492818 containerd[1756]: time="2025-02-13T15:58:07.492237203Z" level=info msg="StopPodSandbox for \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\"" Feb 13 15:58:07.493185 containerd[1756]: time="2025-02-13T15:58:07.493148845Z" level=info msg="StopPodSandbox for \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\"" Feb 13 15:58:07.493292 containerd[1756]: time="2025-02-13T15:58:07.493268605Z" level=info msg="TearDown network for sandbox \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\" successfully" Feb 13 15:58:07.493292 containerd[1756]: time="2025-02-13T15:58:07.493336685Z" level=info msg="StopPodSandbox for \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\" returns 
successfully" Feb 13 15:58:07.493853 containerd[1756]: time="2025-02-13T15:58:07.493790086Z" level=info msg="StopPodSandbox for \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\"" Feb 13 15:58:07.494102 containerd[1756]: time="2025-02-13T15:58:07.493859486Z" level=info msg="TearDown network for sandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\" successfully" Feb 13 15:58:07.494102 containerd[1756]: time="2025-02-13T15:58:07.493896767Z" level=info msg="StopPodSandbox for \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\" returns successfully" Feb 13 15:58:07.495035 containerd[1756]: time="2025-02-13T15:58:07.494429488Z" level=info msg="StopPodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\"" Feb 13 15:58:07.495035 containerd[1756]: time="2025-02-13T15:58:07.494504328Z" level=info msg="TearDown network for sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" successfully" Feb 13 15:58:07.495035 containerd[1756]: time="2025-02-13T15:58:07.494513248Z" level=info msg="StopPodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" returns successfully" Feb 13 15:58:07.495185 containerd[1756]: time="2025-02-13T15:58:07.495059129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-9dqxt,Uid:4713ae5a-4a4f-4494-8aa7-cdc51f64b486,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:58:07.496091 containerd[1756]: time="2025-02-13T15:58:07.495826690Z" level=info msg="Ensure that sandbox 043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8 in task-service has been cleanup successfully" Feb 13 15:58:07.496547 containerd[1756]: time="2025-02-13T15:58:07.496494332Z" level=info msg="TearDown network for sandbox \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\" successfully" Feb 13 15:58:07.496547 containerd[1756]: time="2025-02-13T15:58:07.496520172Z" level=info 
msg="StopPodSandbox for \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\" returns successfully"
Feb 13 15:58:07.499626 containerd[1756]: time="2025-02-13T15:58:07.499570098Z" level=info msg="StopPodSandbox for \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\""
Feb 13 15:58:07.500148 systemd[1]: run-netns-cni\x2d472eb8ba\x2dde5e\x2da892\x2d65df\x2dfddedede4f0b.mount: Deactivated successfully.
Feb 13 15:58:07.500423 containerd[1756]: time="2025-02-13T15:58:07.500148939Z" level=info msg="TearDown network for sandbox \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\" successfully"
Feb 13 15:58:07.500423 containerd[1756]: time="2025-02-13T15:58:07.500231499Z" level=info msg="StopPodSandbox for \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\" returns successfully"
Feb 13 15:58:07.501281 kubelet[3351]: I0213 15:58:07.501139 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666"
Feb 13 15:58:07.502604 containerd[1756]: time="2025-02-13T15:58:07.501780302Z" level=info msg="StopPodSandbox for \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\""
Feb 13 15:58:07.502604 containerd[1756]: time="2025-02-13T15:58:07.501883823Z" level=info msg="TearDown network for sandbox \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\" successfully"
Feb 13 15:58:07.502604 containerd[1756]: time="2025-02-13T15:58:07.501894343Z" level=info msg="StopPodSandbox for \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\" returns successfully"
Feb 13 15:58:07.502604 containerd[1756]: time="2025-02-13T15:58:07.502542584Z" level=info msg="StopPodSandbox for \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\""
Feb 13 15:58:07.503310 containerd[1756]: time="2025-02-13T15:58:07.502724024Z" level=info msg="Ensure that sandbox d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666 in task-service has been cleanup successfully"
Feb 13 15:58:07.503310 containerd[1756]: time="2025-02-13T15:58:07.503044065Z" level=info msg="TearDown network for sandbox \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\" successfully"
Feb 13 15:58:07.503310 containerd[1756]: time="2025-02-13T15:58:07.503062945Z" level=info msg="StopPodSandbox for \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\" returns successfully"
Feb 13 15:58:07.503310 containerd[1756]: time="2025-02-13T15:58:07.503123505Z" level=info msg="StopPodSandbox for \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\""
Feb 13 15:58:07.503310 containerd[1756]: time="2025-02-13T15:58:07.503189465Z" level=info msg="TearDown network for sandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\" successfully"
Feb 13 15:58:07.503310 containerd[1756]: time="2025-02-13T15:58:07.503198425Z" level=info msg="StopPodSandbox for \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\" returns successfully"
Feb 13 15:58:07.505953 containerd[1756]: time="2025-02-13T15:58:07.504634988Z" level=info msg="StopPodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\""
Feb 13 15:58:07.505953 containerd[1756]: time="2025-02-13T15:58:07.504734108Z" level=info msg="TearDown network for sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" successfully"
Feb 13 15:58:07.505953 containerd[1756]: time="2025-02-13T15:58:07.504743828Z" level=info msg="StopPodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" returns successfully"
Feb 13 15:58:07.505953 containerd[1756]: time="2025-02-13T15:58:07.504790989Z" level=info msg="StopPodSandbox for \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\""
Feb 13 15:58:07.505953 containerd[1756]: time="2025-02-13T15:58:07.504848349Z" level=info msg="TearDown network for sandbox \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\" successfully"
Feb 13 15:58:07.505953 containerd[1756]: time="2025-02-13T15:58:07.504857109Z" level=info msg="StopPodSandbox for \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\" returns successfully"
Feb 13 15:58:07.506628 systemd[1]: run-netns-cni\x2d9aad8110\x2d4ba6\x2dd510\x2debc1\x2d8de13059accf.mount: Deactivated successfully.
Feb 13 15:58:07.508768 containerd[1756]: time="2025-02-13T15:58:07.507746115Z" level=info msg="StopPodSandbox for \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\""
Feb 13 15:58:07.508768 containerd[1756]: time="2025-02-13T15:58:07.507862355Z" level=info msg="TearDown network for sandbox \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\" successfully"
Feb 13 15:58:07.508768 containerd[1756]: time="2025-02-13T15:58:07.507873035Z" level=info msg="StopPodSandbox for \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\" returns successfully"
Feb 13 15:58:07.508768 containerd[1756]: time="2025-02-13T15:58:07.508534396Z" level=info msg="StopPodSandbox for \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\""
Feb 13 15:58:07.508768 containerd[1756]: time="2025-02-13T15:58:07.508737877Z" level=info msg="TearDown network for sandbox \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\" successfully"
Feb 13 15:58:07.508768 containerd[1756]: time="2025-02-13T15:58:07.508751757Z" level=info msg="StopPodSandbox for \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\" returns successfully"
Feb 13 15:58:07.509616 containerd[1756]: time="2025-02-13T15:58:07.509205158Z" level=info msg="StopPodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\""
Feb 13 15:58:07.509616 containerd[1756]: time="2025-02-13T15:58:07.509277798Z" level=info msg="TearDown network for sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" successfully"
Feb 13 15:58:07.509616 containerd[1756]: time="2025-02-13T15:58:07.509287358Z" level=info msg="StopPodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" returns successfully"
Feb 13 15:58:07.510204 containerd[1756]: time="2025-02-13T15:58:07.510144919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xz2g6,Uid:7dd69579-7ca4-4802-a6f8-37ab66ddcef1,Namespace:kube-system,Attempt:5,}"
Feb 13 15:58:07.510688 kubelet[3351]: I0213 15:58:07.510653 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf"
Feb 13 15:58:07.511514 containerd[1756]: time="2025-02-13T15:58:07.511140281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j7w87,Uid:a37d0509-8180-495d-aac5-1e394b4d33c7,Namespace:kube-system,Attempt:5,}"
Feb 13 15:58:07.511878 containerd[1756]: time="2025-02-13T15:58:07.511851923Z" level=info msg="StopPodSandbox for \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\""
Feb 13 15:58:07.512832 containerd[1756]: time="2025-02-13T15:58:07.512784285Z" level=info msg="Ensure that sandbox a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf in task-service has been cleanup successfully"
Feb 13 15:58:07.515496 containerd[1756]: time="2025-02-13T15:58:07.515276930Z" level=info msg="TearDown network for sandbox \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\" successfully"
Feb 13 15:58:07.515496 containerd[1756]: time="2025-02-13T15:58:07.515430930Z" level=info msg="StopPodSandbox for \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\" returns successfully"
Feb 13 15:58:07.517157 containerd[1756]: time="2025-02-13T15:58:07.517096733Z" level=info msg="StopPodSandbox for \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\""
Feb 13 15:58:07.517666 containerd[1756]: time="2025-02-13T15:58:07.517612815Z" level=info msg="TearDown network for sandbox \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\" successfully"
Feb 13 15:58:07.517666 containerd[1756]: time="2025-02-13T15:58:07.517634495Z" level=info msg="StopPodSandbox for \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\" returns successfully"
Feb 13 15:58:07.519027 containerd[1756]: time="2025-02-13T15:58:07.518985777Z" level=info msg="StopPodSandbox for \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\""
Feb 13 15:58:07.519286 containerd[1756]: time="2025-02-13T15:58:07.519086657Z" level=info msg="TearDown network for sandbox \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\" successfully"
Feb 13 15:58:07.519286 containerd[1756]: time="2025-02-13T15:58:07.519097458Z" level=info msg="StopPodSandbox for \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\" returns successfully"
Feb 13 15:58:07.520681 containerd[1756]: time="2025-02-13T15:58:07.520409100Z" level=info msg="StopPodSandbox for \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\""
Feb 13 15:58:07.520681 containerd[1756]: time="2025-02-13T15:58:07.520601021Z" level=info msg="TearDown network for sandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\" successfully"
Feb 13 15:58:07.520681 containerd[1756]: time="2025-02-13T15:58:07.520620821Z" level=info msg="StopPodSandbox for \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\" returns successfully"
Feb 13 15:58:07.520832 kubelet[3351]: I0213 15:58:07.520675 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76"
Feb 13 15:58:07.523415 containerd[1756]: time="2025-02-13T15:58:07.521538742Z" level=info msg="StopPodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\""
Feb 13 15:58:07.523415 containerd[1756]: time="2025-02-13T15:58:07.523017705Z" level=info msg="TearDown network for sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" successfully"
Feb 13 15:58:07.523415 containerd[1756]: time="2025-02-13T15:58:07.523041986Z" level=info msg="StopPodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" returns successfully"
Feb 13 15:58:07.523415 containerd[1756]: time="2025-02-13T15:58:07.521947383Z" level=info msg="StopPodSandbox for \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\""
Feb 13 15:58:07.523415 containerd[1756]: time="2025-02-13T15:58:07.523275986Z" level=info msg="Ensure that sandbox bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76 in task-service has been cleanup successfully"
Feb 13 15:58:07.524396 containerd[1756]: time="2025-02-13T15:58:07.524352268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd8d987-7vztn,Uid:17906d2c-fcbc-4df1-8ac2-176024c123e0,Namespace:calico-system,Attempt:5,}"
Feb 13 15:58:07.526268 containerd[1756]: time="2025-02-13T15:58:07.525808711Z" level=info msg="TearDown network for sandbox \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\" successfully"
Feb 13 15:58:07.526268 containerd[1756]: time="2025-02-13T15:58:07.525837991Z" level=info msg="StopPodSandbox for \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\" returns successfully"
Feb 13 15:58:07.527688 containerd[1756]: time="2025-02-13T15:58:07.527658595Z" level=info msg="StopPodSandbox for \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\""
Feb 13 15:58:07.527984 containerd[1756]: time="2025-02-13T15:58:07.527967395Z" level=info msg="TearDown network for sandbox \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\" successfully"
Feb 13 15:58:07.528156 containerd[1756]: time="2025-02-13T15:58:07.528140236Z" level=info msg="StopPodSandbox for \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\" returns successfully"
Feb 13 15:58:07.529268 containerd[1756]: time="2025-02-13T15:58:07.529092438Z" level=info msg="StopPodSandbox for \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\""
Feb 13 15:58:07.529612 containerd[1756]: time="2025-02-13T15:58:07.529585239Z" level=info msg="TearDown network for sandbox \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\" successfully"
Feb 13 15:58:07.529792 containerd[1756]: time="2025-02-13T15:58:07.529773519Z" level=info msg="StopPodSandbox for \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\" returns successfully"
Feb 13 15:58:07.532570 containerd[1756]: time="2025-02-13T15:58:07.532383564Z" level=info msg="StopPodSandbox for \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\""
Feb 13 15:58:07.532570 containerd[1756]: time="2025-02-13T15:58:07.532486165Z" level=info msg="TearDown network for sandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\" successfully"
Feb 13 15:58:07.532570 containerd[1756]: time="2025-02-13T15:58:07.532495845Z" level=info msg="StopPodSandbox for \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\" returns successfully"
Feb 13 15:58:07.533456 containerd[1756]: time="2025-02-13T15:58:07.533176646Z" level=info msg="StopPodSandbox for \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\""
Feb 13 15:58:07.533456 containerd[1756]: time="2025-02-13T15:58:07.533278366Z" level=info msg="TearDown network for sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" successfully"
Feb 13 15:58:07.533456 containerd[1756]: time="2025-02-13T15:58:07.533331086Z" level=info msg="StopPodSandbox for \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" returns successfully"
Feb 13 15:58:07.534031 containerd[1756]: time="2025-02-13T15:58:07.533981808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbpxc,Uid:2d8778e0-23a8-47a6-b01b-5b701fc009d0,Namespace:calico-system,Attempt:5,}"
Feb 13 15:58:07.871848 systemd[1]: run-netns-cni\x2d16ebde55\x2d5452\x2ddb2c\x2de1fe\x2dd2f1612e2ee9.mount: Deactivated successfully.
Feb 13 15:58:07.871942 systemd[1]: run-netns-cni\x2d316f35db\x2d004f\x2d9122\x2d361f\x2db53d1a6b3b95.mount: Deactivated successfully.
Feb 13 15:58:07.927875 containerd[1756]: time="2025-02-13T15:58:07.927817364Z" level=error msg="Failed to destroy network for sandbox \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:07.928869 containerd[1756]: time="2025-02-13T15:58:07.928828047Z" level=error msg="encountered an error cleaning up failed sandbox \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:07.929239 containerd[1756]: time="2025-02-13T15:58:07.929117567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbpxc,Uid:2d8778e0-23a8-47a6-b01b-5b701fc009d0,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:07.929722 kubelet[3351]: E0213 15:58:07.929697 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:07.930237 kubelet[3351]: E0213 15:58:07.930101 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbpxc"
Feb 13 15:58:07.930237 kubelet[3351]: E0213 15:58:07.930132 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbpxc"
Feb 13 15:58:07.930237 kubelet[3351]: E0213 15:58:07.930199 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qbpxc_calico-system(2d8778e0-23a8-47a6-b01b-5b701fc009d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qbpxc_calico-system(2d8778e0-23a8-47a6-b01b-5b701fc009d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qbpxc" podUID="2d8778e0-23a8-47a6-b01b-5b701fc009d0"
Feb 13 15:58:07.933270 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493-shm.mount: Deactivated successfully.
Feb 13 15:58:07.983518 containerd[1756]: time="2025-02-13T15:58:07.983473717Z" level=error msg="Failed to destroy network for sandbox \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:07.984565 containerd[1756]: time="2025-02-13T15:58:07.984482239Z" level=error msg="encountered an error cleaning up failed sandbox \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:07.984565 containerd[1756]: time="2025-02-13T15:58:07.984565879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-9dqxt,Uid:4713ae5a-4a4f-4494-8aa7-cdc51f64b486,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:07.984888 kubelet[3351]: E0213 15:58:07.984859 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:07.984947 kubelet[3351]: E0213 15:58:07.984932 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt"
Feb 13 15:58:07.984973 kubelet[3351]: E0213 15:58:07.984960 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt"
Feb 13 15:58:07.985045 kubelet[3351]: E0213 15:58:07.985029 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5885744d75-9dqxt_calico-apiserver(4713ae5a-4a4f-4494-8aa7-cdc51f64b486)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5885744d75-9dqxt_calico-apiserver(4713ae5a-4a4f-4494-8aa7-cdc51f64b486)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" podUID="4713ae5a-4a4f-4494-8aa7-cdc51f64b486"
Feb 13 15:58:07.990390 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2-shm.mount: Deactivated successfully.
Feb 13 15:58:08.048364 containerd[1756]: time="2025-02-13T15:58:08.048272728Z" level=error msg="Failed to destroy network for sandbox \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.050674 containerd[1756]: time="2025-02-13T15:58:08.050512373Z" level=error msg="encountered an error cleaning up failed sandbox \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.052575 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8-shm.mount: Deactivated successfully.
Feb 13 15:58:08.054202 containerd[1756]: time="2025-02-13T15:58:08.054144700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd8d987-7vztn,Uid:17906d2c-fcbc-4df1-8ac2-176024c123e0,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.054585 kubelet[3351]: E0213 15:58:08.054462 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.054585 kubelet[3351]: E0213 15:58:08.054587 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn"
Feb 13 15:58:08.054804 kubelet[3351]: E0213 15:58:08.054621 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn"
Feb 13 15:58:08.054804 kubelet[3351]: E0213 15:58:08.054678 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-684dd8d987-7vztn_calico-system(17906d2c-fcbc-4df1-8ac2-176024c123e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-684dd8d987-7vztn_calico-system(17906d2c-fcbc-4df1-8ac2-176024c123e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" podUID="17906d2c-fcbc-4df1-8ac2-176024c123e0"
Feb 13 15:58:08.056733 containerd[1756]: time="2025-02-13T15:58:08.056601545Z" level=error msg="Failed to destroy network for sandbox \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.060165 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc-shm.mount: Deactivated successfully.
Feb 13 15:58:08.061664 containerd[1756]: time="2025-02-13T15:58:08.061259194Z" level=error msg="encountered an error cleaning up failed sandbox \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.061664 containerd[1756]: time="2025-02-13T15:58:08.061379595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.061886 kubelet[3351]: E0213 15:58:08.061660 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.061886 kubelet[3351]: E0213 15:58:08.061725 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5"
Feb 13 15:58:08.061886 kubelet[3351]: E0213 15:58:08.061746 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5"
Feb 13 15:58:08.062025 kubelet[3351]: E0213 15:58:08.061845 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5885744d75-ph4x5_calico-apiserver(38b25543-f3d8-4325-8684-f120f4c5229a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5885744d75-ph4x5_calico-apiserver(38b25543-f3d8-4325-8684-f120f4c5229a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" podUID="38b25543-f3d8-4325-8684-f120f4c5229a"
Feb 13 15:58:08.070570 containerd[1756]: time="2025-02-13T15:58:08.070327653Z" level=error msg="Failed to destroy network for sandbox \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.071924 containerd[1756]: time="2025-02-13T15:58:08.071693376Z" level=error msg="encountered an error cleaning up failed sandbox \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.072533 containerd[1756]: time="2025-02-13T15:58:08.072329737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j7w87,Uid:a37d0509-8180-495d-aac5-1e394b4d33c7,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.072948 kubelet[3351]: E0213 15:58:08.072816 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.072948 kubelet[3351]: E0213 15:58:08.072869 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j7w87"
Feb 13 15:58:08.072948 kubelet[3351]: E0213 15:58:08.072902 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j7w87"
Feb 13 15:58:08.073321 kubelet[3351]: E0213 15:58:08.072975 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-j7w87_kube-system(a37d0509-8180-495d-aac5-1e394b4d33c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-j7w87_kube-system(a37d0509-8180-495d-aac5-1e394b4d33c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-j7w87" podUID="a37d0509-8180-495d-aac5-1e394b4d33c7"
Feb 13 15:58:08.079860 containerd[1756]: time="2025-02-13T15:58:08.079722712Z" level=error msg="Failed to destroy network for sandbox \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.080286 containerd[1756]: time="2025-02-13T15:58:08.080259873Z" level=error msg="encountered an error cleaning up failed sandbox \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.080541 containerd[1756]: time="2025-02-13T15:58:08.080435553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xz2g6,Uid:7dd69579-7ca4-4802-a6f8-37ab66ddcef1,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.081143 kubelet[3351]: E0213 15:58:08.080767 3351 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:58:08.081143 kubelet[3351]: E0213 15:58:08.080823 3351 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xz2g6"
Feb 13 15:58:08.081143 kubelet[3351]: E0213 15:58:08.080843 3351 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xz2g6"
Feb 13 15:58:08.081446 kubelet[3351]: E0213 15:58:08.080900 3351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xz2g6_kube-system(7dd69579-7ca4-4802-a6f8-37ab66ddcef1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xz2g6_kube-system(7dd69579-7ca4-4802-a6f8-37ab66ddcef1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xz2g6" podUID="7dd69579-7ca4-4802-a6f8-37ab66ddcef1"
Feb 13 15:58:08.149125 containerd[1756]: time="2025-02-13T15:58:08.148981732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:58:08.152338 containerd[1756]: time="2025-02-13T15:58:08.152196738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762"
Feb 13 15:58:08.157257 containerd[1756]: time="2025-02-13T15:58:08.156452147Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:58:08.162432 containerd[1756]: time="2025-02-13T15:58:08.162385599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:58:08.163500 containerd[1756]: time="2025-02-13T15:58:08.163382361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 5.873994831s"
Feb 13 15:58:08.163500 containerd[1756]: time="2025-02-13T15:58:08.163418921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\""
Feb 13 15:58:08.172145 containerd[1756]: time="2025-02-13T15:58:08.172095619Z" level=info msg="CreateContainer within sandbox \"2ba2988873d1d88761c37c4c2a874a4714cde096e239f70a45cfb5942d40e7b0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 15:58:08.219250 containerd[1756]: time="2025-02-13T15:58:08.219175914Z" level=info msg="CreateContainer within sandbox \"2ba2988873d1d88761c37c4c2a874a4714cde096e239f70a45cfb5942d40e7b0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"91f8579597d0d73d73b2d52edd3a6e2a80566b47a1f624eb2ed5d2ee835d5f5a\""
Feb 13 15:58:08.219951 containerd[1756]: time="2025-02-13T15:58:08.219751515Z" level=info msg="StartContainer for \"91f8579597d0d73d73b2d52edd3a6e2a80566b47a1f624eb2ed5d2ee835d5f5a\""
Feb 13 15:58:08.248505 systemd[1]: Started cri-containerd-91f8579597d0d73d73b2d52edd3a6e2a80566b47a1f624eb2ed5d2ee835d5f5a.scope - libcontainer container 91f8579597d0d73d73b2d52edd3a6e2a80566b47a1f624eb2ed5d2ee835d5f5a.
Feb 13 15:58:08.289053 containerd[1756]: time="2025-02-13T15:58:08.288994095Z" level=info msg="StartContainer for \"91f8579597d0d73d73b2d52edd3a6e2a80566b47a1f624eb2ed5d2ee835d5f5a\" returns successfully"
Feb 13 15:58:08.389844 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 15:58:08.389957 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Feb 13 15:58:08.538817 kubelet[3351]: I0213 15:58:08.538776 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8" Feb 13 15:58:08.540667 containerd[1756]: time="2025-02-13T15:58:08.540168763Z" level=info msg="StopPodSandbox for \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\"" Feb 13 15:58:08.540667 containerd[1756]: time="2025-02-13T15:58:08.540378804Z" level=info msg="Ensure that sandbox 6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8 in task-service has been cleanup successfully" Feb 13 15:58:08.541782 containerd[1756]: time="2025-02-13T15:58:08.541553766Z" level=info msg="TearDown network for sandbox \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\" successfully" Feb 13 15:58:08.541782 containerd[1756]: time="2025-02-13T15:58:08.541603446Z" level=info msg="StopPodSandbox for \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\" returns successfully" Feb 13 15:58:08.542170 containerd[1756]: time="2025-02-13T15:58:08.542046247Z" level=info msg="StopPodSandbox for \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\"" Feb 13 15:58:08.542170 containerd[1756]: time="2025-02-13T15:58:08.542128647Z" level=info msg="TearDown network for sandbox \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\" successfully" Feb 13 15:58:08.542170 containerd[1756]: time="2025-02-13T15:58:08.542138687Z" level=info msg="StopPodSandbox for \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\" returns successfully" Feb 13 15:58:08.542823 containerd[1756]: time="2025-02-13T15:58:08.542563208Z" level=info msg="StopPodSandbox for \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\"" Feb 13 15:58:08.543270 containerd[1756]: time="2025-02-13T15:58:08.543181410Z" level=info msg="TearDown network for sandbox 
\"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\" successfully" Feb 13 15:58:08.543270 containerd[1756]: time="2025-02-13T15:58:08.543202330Z" level=info msg="StopPodSandbox for \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\" returns successfully" Feb 13 15:58:08.544171 containerd[1756]: time="2025-02-13T15:58:08.544116571Z" level=info msg="StopPodSandbox for \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\"" Feb 13 15:58:08.544366 containerd[1756]: time="2025-02-13T15:58:08.544289012Z" level=info msg="TearDown network for sandbox \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\" successfully" Feb 13 15:58:08.544366 containerd[1756]: time="2025-02-13T15:58:08.544330932Z" level=info msg="StopPodSandbox for \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\" returns successfully" Feb 13 15:58:08.545125 containerd[1756]: time="2025-02-13T15:58:08.545080533Z" level=info msg="StopPodSandbox for \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\"" Feb 13 15:58:08.545603 containerd[1756]: time="2025-02-13T15:58:08.545220574Z" level=info msg="TearDown network for sandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\" successfully" Feb 13 15:58:08.545603 containerd[1756]: time="2025-02-13T15:58:08.545237254Z" level=info msg="StopPodSandbox for \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\" returns successfully" Feb 13 15:58:08.546745 containerd[1756]: time="2025-02-13T15:58:08.546641177Z" level=info msg="StopPodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\"" Feb 13 15:58:08.546893 kubelet[3351]: I0213 15:58:08.546859 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493" Feb 13 15:58:08.548323 containerd[1756]: time="2025-02-13T15:58:08.547430658Z" level=info msg="TearDown network for 
sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" successfully" Feb 13 15:58:08.548323 containerd[1756]: time="2025-02-13T15:58:08.547474898Z" level=info msg="StopPodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" returns successfully" Feb 13 15:58:08.549193 containerd[1756]: time="2025-02-13T15:58:08.548888021Z" level=info msg="StopPodSandbox for \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\"" Feb 13 15:58:08.549193 containerd[1756]: time="2025-02-13T15:58:08.549068901Z" level=info msg="Ensure that sandbox 411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493 in task-service has been cleanup successfully" Feb 13 15:58:08.551021 containerd[1756]: time="2025-02-13T15:58:08.550981145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd8d987-7vztn,Uid:17906d2c-fcbc-4df1-8ac2-176024c123e0,Namespace:calico-system,Attempt:6,}" Feb 13 15:58:08.551122 containerd[1756]: time="2025-02-13T15:58:08.551102266Z" level=info msg="TearDown network for sandbox \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\" successfully" Feb 13 15:58:08.551596 containerd[1756]: time="2025-02-13T15:58:08.551280186Z" level=info msg="StopPodSandbox for \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\" returns successfully" Feb 13 15:58:08.553119 containerd[1756]: time="2025-02-13T15:58:08.552802669Z" level=info msg="StopPodSandbox for \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\"" Feb 13 15:58:08.553669 containerd[1756]: time="2025-02-13T15:58:08.553633351Z" level=info msg="TearDown network for sandbox \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\" successfully" Feb 13 15:58:08.553746 containerd[1756]: time="2025-02-13T15:58:08.553685711Z" level=info msg="StopPodSandbox for \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\" returns successfully" Feb 13 15:58:08.555186 
containerd[1756]: time="2025-02-13T15:58:08.555080514Z" level=info msg="StopPodSandbox for \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\"" Feb 13 15:58:08.555582 containerd[1756]: time="2025-02-13T15:58:08.555549195Z" level=info msg="TearDown network for sandbox \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\" successfully" Feb 13 15:58:08.555582 containerd[1756]: time="2025-02-13T15:58:08.555576675Z" level=info msg="StopPodSandbox for \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\" returns successfully" Feb 13 15:58:08.556264 containerd[1756]: time="2025-02-13T15:58:08.556233156Z" level=info msg="StopPodSandbox for \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\"" Feb 13 15:58:08.556451 containerd[1756]: time="2025-02-13T15:58:08.556426956Z" level=info msg="TearDown network for sandbox \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\" successfully" Feb 13 15:58:08.556495 containerd[1756]: time="2025-02-13T15:58:08.556448196Z" level=info msg="StopPodSandbox for \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\" returns successfully" Feb 13 15:58:08.558149 containerd[1756]: time="2025-02-13T15:58:08.558076720Z" level=info msg="StopPodSandbox for \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\"" Feb 13 15:58:08.558310 containerd[1756]: time="2025-02-13T15:58:08.558254520Z" level=info msg="TearDown network for sandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\" successfully" Feb 13 15:58:08.558356 containerd[1756]: time="2025-02-13T15:58:08.558313640Z" level=info msg="StopPodSandbox for \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\" returns successfully" Feb 13 15:58:08.559360 containerd[1756]: time="2025-02-13T15:58:08.559325162Z" level=info msg="StopPodSandbox for \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\"" Feb 13 15:58:08.559687 containerd[1756]: 
time="2025-02-13T15:58:08.559464522Z" level=info msg="TearDown network for sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" successfully" Feb 13 15:58:08.559687 containerd[1756]: time="2025-02-13T15:58:08.559502483Z" level=info msg="StopPodSandbox for \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" returns successfully" Feb 13 15:58:08.560935 containerd[1756]: time="2025-02-13T15:58:08.560888965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbpxc,Uid:2d8778e0-23a8-47a6-b01b-5b701fc009d0,Namespace:calico-system,Attempt:6,}" Feb 13 15:58:08.562464 kubelet[3351]: I0213 15:58:08.562437 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc" Feb 13 15:58:08.564934 containerd[1756]: time="2025-02-13T15:58:08.563501251Z" level=info msg="StopPodSandbox for \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\"" Feb 13 15:58:08.564934 containerd[1756]: time="2025-02-13T15:58:08.563764571Z" level=info msg="Ensure that sandbox 4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc in task-service has been cleanup successfully" Feb 13 15:58:08.564934 containerd[1756]: time="2025-02-13T15:58:08.563962572Z" level=info msg="TearDown network for sandbox \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\" successfully" Feb 13 15:58:08.564934 containerd[1756]: time="2025-02-13T15:58:08.563977012Z" level=info msg="StopPodSandbox for \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\" returns successfully" Feb 13 15:58:08.564934 containerd[1756]: time="2025-02-13T15:58:08.564924534Z" level=info msg="StopPodSandbox for \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\"" Feb 13 15:58:08.565170 containerd[1756]: time="2025-02-13T15:58:08.565038534Z" level=info msg="TearDown network for sandbox 
\"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\" successfully" Feb 13 15:58:08.565170 containerd[1756]: time="2025-02-13T15:58:08.565050294Z" level=info msg="StopPodSandbox for \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\" returns successfully" Feb 13 15:58:08.566212 containerd[1756]: time="2025-02-13T15:58:08.565791615Z" level=info msg="StopPodSandbox for \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\"" Feb 13 15:58:08.566212 containerd[1756]: time="2025-02-13T15:58:08.565884815Z" level=info msg="TearDown network for sandbox \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\" successfully" Feb 13 15:58:08.566212 containerd[1756]: time="2025-02-13T15:58:08.565894615Z" level=info msg="StopPodSandbox for \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\" returns successfully" Feb 13 15:58:08.567326 containerd[1756]: time="2025-02-13T15:58:08.566788777Z" level=info msg="StopPodSandbox for \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\"" Feb 13 15:58:08.567326 containerd[1756]: time="2025-02-13T15:58:08.566899818Z" level=info msg="TearDown network for sandbox \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\" successfully" Feb 13 15:58:08.567326 containerd[1756]: time="2025-02-13T15:58:08.566910418Z" level=info msg="StopPodSandbox for \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\" returns successfully" Feb 13 15:58:08.568098 containerd[1756]: time="2025-02-13T15:58:08.567979260Z" level=info msg="StopPodSandbox for \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\"" Feb 13 15:58:08.568375 containerd[1756]: time="2025-02-13T15:58:08.568349940Z" level=info msg="TearDown network for sandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\" successfully" Feb 13 15:58:08.568375 containerd[1756]: time="2025-02-13T15:58:08.568370740Z" level=info msg="StopPodSandbox for 
\"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\" returns successfully" Feb 13 15:58:08.570055 containerd[1756]: time="2025-02-13T15:58:08.569825823Z" level=info msg="StopPodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\"" Feb 13 15:58:08.570055 containerd[1756]: time="2025-02-13T15:58:08.569975104Z" level=info msg="TearDown network for sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" successfully" Feb 13 15:58:08.570055 containerd[1756]: time="2025-02-13T15:58:08.569993664Z" level=info msg="StopPodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" returns successfully" Feb 13 15:58:08.571321 containerd[1756]: time="2025-02-13T15:58:08.570950306Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\"" Feb 13 15:58:08.571321 containerd[1756]: time="2025-02-13T15:58:08.571053946Z" level=info msg="TearDown network for sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" successfully" Feb 13 15:58:08.571321 containerd[1756]: time="2025-02-13T15:58:08.571063626Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" returns successfully" Feb 13 15:58:08.572460 kubelet[3351]: I0213 15:58:08.571759 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2" Feb 13 15:58:08.573087 containerd[1756]: time="2025-02-13T15:58:08.573062230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:7,}" Feb 13 15:58:08.573449 containerd[1756]: time="2025-02-13T15:58:08.573417671Z" level=info msg="StopPodSandbox for \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\"" Feb 13 15:58:08.573682 containerd[1756]: 
time="2025-02-13T15:58:08.573639631Z" level=info msg="Ensure that sandbox e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2 in task-service has been cleanup successfully" Feb 13 15:58:08.574016 containerd[1756]: time="2025-02-13T15:58:08.573976272Z" level=info msg="TearDown network for sandbox \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\" successfully" Feb 13 15:58:08.574016 containerd[1756]: time="2025-02-13T15:58:08.573999152Z" level=info msg="StopPodSandbox for \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\" returns successfully" Feb 13 15:58:08.574575 containerd[1756]: time="2025-02-13T15:58:08.574398673Z" level=info msg="StopPodSandbox for \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\"" Feb 13 15:58:08.574575 containerd[1756]: time="2025-02-13T15:58:08.574483953Z" level=info msg="TearDown network for sandbox \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\" successfully" Feb 13 15:58:08.574575 containerd[1756]: time="2025-02-13T15:58:08.574493073Z" level=info msg="StopPodSandbox for \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\" returns successfully" Feb 13 15:58:08.574894 containerd[1756]: time="2025-02-13T15:58:08.574870514Z" level=info msg="StopPodSandbox for \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\"" Feb 13 15:58:08.575073 containerd[1756]: time="2025-02-13T15:58:08.575056154Z" level=info msg="TearDown network for sandbox \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\" successfully" Feb 13 15:58:08.575175 containerd[1756]: time="2025-02-13T15:58:08.575161234Z" level=info msg="StopPodSandbox for \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\" returns successfully" Feb 13 15:58:08.576679 containerd[1756]: time="2025-02-13T15:58:08.576345637Z" level=info msg="StopPodSandbox for \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\"" Feb 13 15:58:08.576679 
containerd[1756]: time="2025-02-13T15:58:08.576435797Z" level=info msg="TearDown network for sandbox \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\" successfully" Feb 13 15:58:08.576679 containerd[1756]: time="2025-02-13T15:58:08.576444957Z" level=info msg="StopPodSandbox for \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\" returns successfully" Feb 13 15:58:08.577555 containerd[1756]: time="2025-02-13T15:58:08.577239318Z" level=info msg="StopPodSandbox for \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\"" Feb 13 15:58:08.577555 containerd[1756]: time="2025-02-13T15:58:08.577351199Z" level=info msg="TearDown network for sandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\" successfully" Feb 13 15:58:08.577555 containerd[1756]: time="2025-02-13T15:58:08.577361839Z" level=info msg="StopPodSandbox for \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\" returns successfully" Feb 13 15:58:08.579197 kubelet[3351]: I0213 15:58:08.578813 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706" Feb 13 15:58:08.579874 containerd[1756]: time="2025-02-13T15:58:08.579823364Z" level=info msg="StopPodSandbox for \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\"" Feb 13 15:58:08.580095 containerd[1756]: time="2025-02-13T15:58:08.580071444Z" level=info msg="StopPodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\"" Feb 13 15:58:08.580240 containerd[1756]: time="2025-02-13T15:58:08.580222284Z" level=info msg="TearDown network for sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" successfully" Feb 13 15:58:08.580325 containerd[1756]: time="2025-02-13T15:58:08.580286285Z" level=info msg="StopPodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" returns successfully" Feb 13 
15:58:08.581410 containerd[1756]: time="2025-02-13T15:58:08.580113364Z" level=info msg="Ensure that sandbox e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706 in task-service has been cleanup successfully" Feb 13 15:58:08.581635 containerd[1756]: time="2025-02-13T15:58:08.581607687Z" level=info msg="TearDown network for sandbox \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\" successfully" Feb 13 15:58:08.581690 containerd[1756]: time="2025-02-13T15:58:08.581628687Z" level=info msg="StopPodSandbox for \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\" returns successfully" Feb 13 15:58:08.582762 containerd[1756]: time="2025-02-13T15:58:08.582338449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-9dqxt,Uid:4713ae5a-4a4f-4494-8aa7-cdc51f64b486,Namespace:calico-apiserver,Attempt:6,}" Feb 13 15:58:08.582762 containerd[1756]: time="2025-02-13T15:58:08.582452969Z" level=info msg="StopPodSandbox for \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\"" Feb 13 15:58:08.582762 containerd[1756]: time="2025-02-13T15:58:08.582539169Z" level=info msg="TearDown network for sandbox \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\" successfully" Feb 13 15:58:08.582762 containerd[1756]: time="2025-02-13T15:58:08.582551049Z" level=info msg="StopPodSandbox for \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\" returns successfully" Feb 13 15:58:08.583543 containerd[1756]: time="2025-02-13T15:58:08.583519451Z" level=info msg="StopPodSandbox for \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\"" Feb 13 15:58:08.583721 containerd[1756]: time="2025-02-13T15:58:08.583679851Z" level=info msg="TearDown network for sandbox \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\" successfully" Feb 13 15:58:08.584419 containerd[1756]: time="2025-02-13T15:58:08.584357893Z" level=info msg="StopPodSandbox for 
\"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\" returns successfully" Feb 13 15:58:08.586070 containerd[1756]: time="2025-02-13T15:58:08.584885334Z" level=info msg="StopPodSandbox for \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\"" Feb 13 15:58:08.586070 containerd[1756]: time="2025-02-13T15:58:08.584971974Z" level=info msg="TearDown network for sandbox \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\" successfully" Feb 13 15:58:08.586070 containerd[1756]: time="2025-02-13T15:58:08.584981894Z" level=info msg="StopPodSandbox for \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\" returns successfully" Feb 13 15:58:08.586283 kubelet[3351]: I0213 15:58:08.586246 3351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388" Feb 13 15:58:08.587251 containerd[1756]: time="2025-02-13T15:58:08.587120818Z" level=info msg="StopPodSandbox for \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\"" Feb 13 15:58:08.587251 containerd[1756]: time="2025-02-13T15:58:08.587227299Z" level=info msg="TearDown network for sandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\" successfully" Feb 13 15:58:08.587251 containerd[1756]: time="2025-02-13T15:58:08.587237699Z" level=info msg="StopPodSandbox for \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\" returns successfully" Feb 13 15:58:08.588153 containerd[1756]: time="2025-02-13T15:58:08.588036060Z" level=info msg="StopPodSandbox for \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\"" Feb 13 15:58:08.588549 containerd[1756]: time="2025-02-13T15:58:08.588515381Z" level=info msg="Ensure that sandbox 86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388 in task-service has been cleanup successfully" Feb 13 15:58:08.589503 containerd[1756]: time="2025-02-13T15:58:08.588946102Z" level=info 
msg="TearDown network for sandbox \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\" successfully" Feb 13 15:58:08.589503 containerd[1756]: time="2025-02-13T15:58:08.588963182Z" level=info msg="StopPodSandbox for \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\" returns successfully" Feb 13 15:58:08.589503 containerd[1756]: time="2025-02-13T15:58:08.589048822Z" level=info msg="StopPodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\"" Feb 13 15:58:08.589503 containerd[1756]: time="2025-02-13T15:58:08.589122422Z" level=info msg="TearDown network for sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" successfully" Feb 13 15:58:08.589503 containerd[1756]: time="2025-02-13T15:58:08.589130983Z" level=info msg="StopPodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" returns successfully" Feb 13 15:58:08.590263 containerd[1756]: time="2025-02-13T15:58:08.589768424Z" level=info msg="StopPodSandbox for \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\"" Feb 13 15:58:08.590263 containerd[1756]: time="2025-02-13T15:58:08.589864384Z" level=info msg="TearDown network for sandbox \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\" successfully" Feb 13 15:58:08.590263 containerd[1756]: time="2025-02-13T15:58:08.589875064Z" level=info msg="StopPodSandbox for \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\" returns successfully" Feb 13 15:58:08.590263 containerd[1756]: time="2025-02-13T15:58:08.589965264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xz2g6,Uid:7dd69579-7ca4-4802-a6f8-37ab66ddcef1,Namespace:kube-system,Attempt:6,}" Feb 13 15:58:08.590644 containerd[1756]: time="2025-02-13T15:58:08.590552505Z" level=info msg="StopPodSandbox for \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\"" Feb 13 15:58:08.590893 containerd[1756]: 
time="2025-02-13T15:58:08.590865426Z" level=info msg="TearDown network for sandbox \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\" successfully" Feb 13 15:58:08.590942 containerd[1756]: time="2025-02-13T15:58:08.590893226Z" level=info msg="StopPodSandbox for \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\" returns successfully" Feb 13 15:58:08.591680 containerd[1756]: time="2025-02-13T15:58:08.591551267Z" level=info msg="StopPodSandbox for \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\"" Feb 13 15:58:08.591680 containerd[1756]: time="2025-02-13T15:58:08.591637908Z" level=info msg="TearDown network for sandbox \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\" successfully" Feb 13 15:58:08.591680 containerd[1756]: time="2025-02-13T15:58:08.591647148Z" level=info msg="StopPodSandbox for \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\" returns successfully" Feb 13 15:58:08.596136 containerd[1756]: time="2025-02-13T15:58:08.594605354Z" level=info msg="StopPodSandbox for \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\"" Feb 13 15:58:08.596136 containerd[1756]: time="2025-02-13T15:58:08.594706194Z" level=info msg="TearDown network for sandbox \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\" successfully" Feb 13 15:58:08.596136 containerd[1756]: time="2025-02-13T15:58:08.594716874Z" level=info msg="StopPodSandbox for \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\" returns successfully" Feb 13 15:58:08.596136 containerd[1756]: time="2025-02-13T15:58:08.595202235Z" level=info msg="StopPodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\"" Feb 13 15:58:08.596136 containerd[1756]: time="2025-02-13T15:58:08.595410755Z" level=info msg="TearDown network for sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" successfully" Feb 13 15:58:08.596136 containerd[1756]: 
time="2025-02-13T15:58:08.595425435Z" level=info msg="StopPodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" returns successfully" Feb 13 15:58:08.600321 containerd[1756]: time="2025-02-13T15:58:08.598151801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j7w87,Uid:a37d0509-8180-495d-aac5-1e394b4d33c7,Namespace:kube-system,Attempt:6,}" Feb 13 15:58:08.876520 systemd[1]: run-netns-cni\x2dd5248429\x2d419e\x2dd235\x2db8f4\x2d18e2d8dd69b0.mount: Deactivated successfully. Feb 13 15:58:08.876628 systemd[1]: run-netns-cni\x2d3841471e\x2df849\x2dc5b7\x2da253\x2d05ec018de90e.mount: Deactivated successfully. Feb 13 15:58:08.876678 systemd[1]: run-netns-cni\x2d66e7a76b\x2dc65b\x2d7c62\x2d7ff6\x2db505fa03a63c.mount: Deactivated successfully. Feb 13 15:58:08.876724 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706-shm.mount: Deactivated successfully. Feb 13 15:58:08.876777 systemd[1]: run-netns-cni\x2daf7ce9dc\x2d80b7\x2dce3c\x2dd6fc\x2dc8af1058ce5e.mount: Deactivated successfully. Feb 13 15:58:08.876820 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388-shm.mount: Deactivated successfully. Feb 13 15:58:08.876867 systemd[1]: run-netns-cni\x2dbbabb0f8\x2dbb95\x2d860b\x2de253\x2d65fc19a3165d.mount: Deactivated successfully. Feb 13 15:58:08.876909 systemd[1]: run-netns-cni\x2d79d45c7a\x2dc090\x2dfd8a\x2df2eb\x2d283213311050.mount: Deactivated successfully. Feb 13 15:58:08.876952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444671075.mount: Deactivated successfully. 
Feb 13 15:58:09.276241 systemd-networkd[1623]: calif0bb0148eb1: Link UP Feb 13 15:58:09.280445 systemd-networkd[1623]: calif0bb0148eb1: Gained carrier Feb 13 15:58:09.282382 systemd-networkd[1623]: calif9122931ad3: Link UP Feb 13 15:58:09.282561 systemd-networkd[1623]: calif9122931ad3: Gained carrier Feb 13 15:58:09.287152 systemd-networkd[1623]: cali4e57b84417e: Link UP Feb 13 15:58:09.288481 systemd-networkd[1623]: cali4e57b84417e: Gained carrier Feb 13 15:58:09.295200 systemd-networkd[1623]: cali31792cf2371: Link UP Feb 13 15:58:09.295865 systemd-networkd[1623]: cali31792cf2371: Gained carrier Feb 13 15:58:09.332916 kubelet[3351]: I0213 15:58:09.332879 3351 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-nmj8t" podStartSLOduration=1.740444379 podStartE2EDuration="18.332831887s" podCreationTimestamp="2025-02-13 15:57:51 +0000 UTC" firstStartedPulling="2025-02-13 15:57:51.571334334 +0000 UTC m=+23.556624916" lastFinishedPulling="2025-02-13 15:58:08.163721842 +0000 UTC m=+40.149012424" observedRunningTime="2025-02-13 15:58:08.628488022 +0000 UTC m=+40.613778604" watchObservedRunningTime="2025-02-13 15:58:09.332831887 +0000 UTC m=+41.318122469" Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:08.872 [INFO][5341] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:08.997 [INFO][5341] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--d82c5cac77-k8s-calico--kube--controllers--684dd8d987--7vztn-eth0 calico-kube-controllers-684dd8d987- calico-system 17906d2c-fcbc-4df1-8ac2-176024c123e0 719 0 2025-02-13 15:57:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:684dd8d987 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] 
[] []} {k8s ci-4152.2.1-a-d82c5cac77 calico-kube-controllers-684dd8d987-7vztn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4e57b84417e [] []}} ContainerID="77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" Namespace="calico-system" Pod="calico-kube-controllers-684dd8d987-7vztn" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--kube--controllers--684dd8d987--7vztn-" Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:08.997 [INFO][5341] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" Namespace="calico-system" Pod="calico-kube-controllers-684dd8d987-7vztn" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--kube--controllers--684dd8d987--7vztn-eth0" Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.069 [INFO][5399] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" HandleID="k8s-pod-network.77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" Workload="ci--4152.2.1--a--d82c5cac77-k8s-calico--kube--controllers--684dd8d987--7vztn-eth0" Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.096 [INFO][5399] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" HandleID="k8s-pod-network.77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" Workload="ci--4152.2.1--a--d82c5cac77-k8s-calico--kube--controllers--684dd8d987--7vztn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317390), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152.2.1-a-d82c5cac77", "pod":"calico-kube-controllers-684dd8d987-7vztn", "timestamp":"2025-02-13 15:58:09.069795435 +0000 UTC"}, Hostname:"ci-4152.2.1-a-d82c5cac77", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.096 [INFO][5399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.196 [INFO][5399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.196 [INFO][5399] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-d82c5cac77' Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.199 [INFO][5399] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.204 [INFO][5399] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.209 [INFO][5399] ipam/ipam.go 489: Trying affinity for 192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.211 [INFO][5399] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.214 [INFO][5399] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.214 [INFO][5399] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.216 [INFO][5399] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9 Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.221 [INFO][5399] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.247 [INFO][5399] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.196/26] block=192.168.53.192/26 handle="k8s-pod-network.77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.247 [INFO][5399] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.196/26] handle="k8s-pod-network.77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.247 [INFO][5399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:58:09.342481 containerd[1756]: 2025-02-13 15:58:09.247 [INFO][5399] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.196/26] IPv6=[] ContainerID="77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" HandleID="k8s-pod-network.77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" Workload="ci--4152.2.1--a--d82c5cac77-k8s-calico--kube--controllers--684dd8d987--7vztn-eth0" Feb 13 15:58:09.343099 containerd[1756]: 2025-02-13 15:58:09.249 [INFO][5341] cni-plugin/k8s.go 386: Populated endpoint ContainerID="77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" Namespace="calico-system" Pod="calico-kube-controllers-684dd8d987-7vztn" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--kube--controllers--684dd8d987--7vztn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--d82c5cac77-k8s-calico--kube--controllers--684dd8d987--7vztn-eth0", GenerateName:"calico-kube-controllers-684dd8d987-", Namespace:"calico-system", SelfLink:"", UID:"17906d2c-fcbc-4df1-8ac2-176024c123e0", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"684dd8d987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-d82c5cac77", ContainerID:"", Pod:"calico-kube-controllers-684dd8d987-7vztn", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4e57b84417e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:58:09.343099 containerd[1756]: 2025-02-13 15:58:09.249 [INFO][5341] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.196/32] ContainerID="77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" Namespace="calico-system" Pod="calico-kube-controllers-684dd8d987-7vztn" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--kube--controllers--684dd8d987--7vztn-eth0" Feb 13 15:58:09.343099 containerd[1756]: 2025-02-13 15:58:09.249 [INFO][5341] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e57b84417e ContainerID="77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" Namespace="calico-system" Pod="calico-kube-controllers-684dd8d987-7vztn" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--kube--controllers--684dd8d987--7vztn-eth0" Feb 13 15:58:09.343099 containerd[1756]: 2025-02-13 15:58:09.288 [INFO][5341] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" Namespace="calico-system" Pod="calico-kube-controllers-684dd8d987-7vztn" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--kube--controllers--684dd8d987--7vztn-eth0" Feb 13 15:58:09.343099 containerd[1756]: 2025-02-13 15:58:09.292 [INFO][5341] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" Namespace="calico-system" Pod="calico-kube-controllers-684dd8d987-7vztn" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--kube--controllers--684dd8d987--7vztn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--d82c5cac77-k8s-calico--kube--controllers--684dd8d987--7vztn-eth0", GenerateName:"calico-kube-controllers-684dd8d987-", Namespace:"calico-system", SelfLink:"", UID:"17906d2c-fcbc-4df1-8ac2-176024c123e0", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"684dd8d987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-d82c5cac77", ContainerID:"77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9", Pod:"calico-kube-controllers-684dd8d987-7vztn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4e57b84417e", MAC:"42:8b:e4:4c:03:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:58:09.343099 containerd[1756]: 2025-02-13 15:58:09.320 [INFO][5341] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9" Namespace="calico-system" Pod="calico-kube-controllers-684dd8d987-7vztn" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--kube--controllers--684dd8d987--7vztn-eth0" Feb 13 
15:58:09.343099 containerd[1756]: 2025-02-13 15:58:08.886 [INFO][5360] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:08.998 [INFO][5360] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--ph4x5-eth0 calico-apiserver-5885744d75- calico-apiserver 38b25543-f3d8-4325-8684-f120f4c5229a 718 0 2025-02-13 15:57:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5885744d75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152.2.1-a-d82c5cac77 calico-apiserver-5885744d75-ph4x5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif0bb0148eb1 [] []}} ContainerID="e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-ph4x5" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--ph4x5-" Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:08.998 [INFO][5360] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-ph4x5" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--ph4x5-eth0" Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.074 [INFO][5406] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" HandleID="k8s-pod-network.e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" Workload="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--ph4x5-eth0" Feb 13 15:58:09.343333 
containerd[1756]: 2025-02-13 15:58:09.094 [INFO][5406] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" HandleID="k8s-pod-network.e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" Workload="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--ph4x5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004caa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152.2.1-a-d82c5cac77", "pod":"calico-apiserver-5885744d75-ph4x5", "timestamp":"2025-02-13 15:58:09.073273522 +0000 UTC"}, Hostname:"ci-4152.2.1-a-d82c5cac77", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.094 [INFO][5406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.163 [INFO][5406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.163 [INFO][5406] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-d82c5cac77' Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.165 [INFO][5406] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.171 [INFO][5406] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.176 [INFO][5406] ipam/ipam.go 489: Trying affinity for 192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.178 [INFO][5406] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.180 [INFO][5406] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.180 [INFO][5406] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.182 [INFO][5406] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71 Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.189 [INFO][5406] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.196 [INFO][5406] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.53.195/26] block=192.168.53.192/26 handle="k8s-pod-network.e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.196 [INFO][5406] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.195/26] handle="k8s-pod-network.e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.196 [INFO][5406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:58:09.343333 containerd[1756]: 2025-02-13 15:58:09.196 [INFO][5406] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.195/26] IPv6=[] ContainerID="e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" HandleID="k8s-pod-network.e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" Workload="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--ph4x5-eth0" Feb 13 15:58:09.343734 containerd[1756]: 2025-02-13 15:58:09.199 [INFO][5360] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-ph4x5" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--ph4x5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--ph4x5-eth0", GenerateName:"calico-apiserver-5885744d75-", Namespace:"calico-apiserver", SelfLink:"", UID:"38b25543-f3d8-4325-8684-f120f4c5229a", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5885744d75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-d82c5cac77", ContainerID:"", Pod:"calico-apiserver-5885744d75-ph4x5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0bb0148eb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:58:09.343734 containerd[1756]: 2025-02-13 15:58:09.199 [INFO][5360] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.195/32] ContainerID="e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-ph4x5" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--ph4x5-eth0" Feb 13 15:58:09.343734 containerd[1756]: 2025-02-13 15:58:09.199 [INFO][5360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0bb0148eb1 ContainerID="e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-ph4x5" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--ph4x5-eth0" Feb 13 15:58:09.343734 containerd[1756]: 2025-02-13 15:58:09.276 [INFO][5360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-ph4x5" 
WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--ph4x5-eth0" Feb 13 15:58:09.343734 containerd[1756]: 2025-02-13 15:58:09.279 [INFO][5360] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-ph4x5" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--ph4x5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--ph4x5-eth0", GenerateName:"calico-apiserver-5885744d75-", Namespace:"calico-apiserver", SelfLink:"", UID:"38b25543-f3d8-4325-8684-f120f4c5229a", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5885744d75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-d82c5cac77", ContainerID:"e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71", Pod:"calico-apiserver-5885744d75-ph4x5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0bb0148eb1", MAC:"b6:9f:8c:6a:de:b4", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:58:09.343734 containerd[1756]: 2025-02-13 15:58:09.323 [INFO][5360] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-ph4x5" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--ph4x5-eth0" Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:08.941 [INFO][5384] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:08.996 [INFO][5384] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--9dqxt-eth0 calico-apiserver-5885744d75- calico-apiserver 4713ae5a-4a4f-4494-8aa7-cdc51f64b486 720 0 2025-02-13 15:57:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5885744d75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152.2.1-a-d82c5cac77 calico-apiserver-5885744d75-9dqxt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif9122931ad3 [] []}} ContainerID="a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-9dqxt" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--9dqxt-" Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:08.996 [INFO][5384] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-9dqxt" 
WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--9dqxt-eth0" Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.068 [INFO][5397] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" HandleID="k8s-pod-network.a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" Workload="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--9dqxt-eth0" Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.093 [INFO][5397] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" HandleID="k8s-pod-network.a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" Workload="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--9dqxt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001fb950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152.2.1-a-d82c5cac77", "pod":"calico-apiserver-5885744d75-9dqxt", "timestamp":"2025-02-13 15:58:09.068829633 +0000 UTC"}, Hostname:"ci-4152.2.1-a-d82c5cac77", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.093 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.122 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.122 [INFO][5397] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-d82c5cac77' Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.130 [INFO][5397] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.136 [INFO][5397] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.141 [INFO][5397] ipam/ipam.go 489: Trying affinity for 192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.143 [INFO][5397] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.146 [INFO][5397] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.146 [INFO][5397] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.148 [INFO][5397] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234 Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.152 [INFO][5397] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.162 [INFO][5397] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.53.194/26] block=192.168.53.192/26 handle="k8s-pod-network.a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.162 [INFO][5397] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.194/26] handle="k8s-pod-network.a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.162 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:58:09.356278 containerd[1756]: 2025-02-13 15:58:09.162 [INFO][5397] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.194/26] IPv6=[] ContainerID="a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" HandleID="k8s-pod-network.a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" Workload="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--9dqxt-eth0" Feb 13 15:58:09.357192 containerd[1756]: 2025-02-13 15:58:09.166 [INFO][5384] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-9dqxt" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--9dqxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--9dqxt-eth0", GenerateName:"calico-apiserver-5885744d75-", Namespace:"calico-apiserver", SelfLink:"", UID:"4713ae5a-4a4f-4494-8aa7-cdc51f64b486", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5885744d75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-d82c5cac77", ContainerID:"", Pod:"calico-apiserver-5885744d75-9dqxt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif9122931ad3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:58:09.357192 containerd[1756]: 2025-02-13 15:58:09.166 [INFO][5384] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.194/32] ContainerID="a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-9dqxt" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--9dqxt-eth0" Feb 13 15:58:09.357192 containerd[1756]: 2025-02-13 15:58:09.166 [INFO][5384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9122931ad3 ContainerID="a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-9dqxt" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--9dqxt-eth0" Feb 13 15:58:09.357192 containerd[1756]: 2025-02-13 15:58:09.285 [INFO][5384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-9dqxt" 
WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--9dqxt-eth0" Feb 13 15:58:09.357192 containerd[1756]: 2025-02-13 15:58:09.290 [INFO][5384] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-9dqxt" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--9dqxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--9dqxt-eth0", GenerateName:"calico-apiserver-5885744d75-", Namespace:"calico-apiserver", SelfLink:"", UID:"4713ae5a-4a4f-4494-8aa7-cdc51f64b486", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5885744d75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-d82c5cac77", ContainerID:"a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234", Pod:"calico-apiserver-5885744d75-9dqxt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif9122931ad3", MAC:"4e:eb:e4:e9:12:17", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:58:09.357192 containerd[1756]: 2025-02-13 15:58:09.347 [INFO][5384] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234" Namespace="calico-apiserver" Pod="calico-apiserver-5885744d75-9dqxt" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-calico--apiserver--5885744d75--9dqxt-eth0" Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:08.870 [INFO][5350] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:08.996 [INFO][5350] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--d82c5cac77-k8s-csi--node--driver--qbpxc-eth0 csi-node-driver- calico-system 2d8778e0-23a8-47a6-b01b-5b701fc009d0 635 0 2025-02-13 15:57:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4152.2.1-a-d82c5cac77 csi-node-driver-qbpxc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali31792cf2371 [] []}} ContainerID="6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" Namespace="calico-system" Pod="csi-node-driver-qbpxc" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-csi--node--driver--qbpxc-" Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:08.996 [INFO][5350] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" Namespace="calico-system" Pod="csi-node-driver-qbpxc" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-csi--node--driver--qbpxc-eth0" Feb 13 15:58:09.358039 
containerd[1756]: 2025-02-13 15:58:09.059 [INFO][5398] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" HandleID="k8s-pod-network.6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" Workload="ci--4152.2.1--a--d82c5cac77-k8s-csi--node--driver--qbpxc-eth0" Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.084 [INFO][5398] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" HandleID="k8s-pod-network.6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" Workload="ci--4152.2.1--a--d82c5cac77-k8s-csi--node--driver--qbpxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152.2.1-a-d82c5cac77", "pod":"csi-node-driver-qbpxc", "timestamp":"2025-02-13 15:58:09.059783215 +0000 UTC"}, Hostname:"ci-4152.2.1-a-d82c5cac77", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.084 [INFO][5398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.085 [INFO][5398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.085 [INFO][5398] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-d82c5cac77' Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.089 [INFO][5398] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.098 [INFO][5398] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.102 [INFO][5398] ipam/ipam.go 489: Trying affinity for 192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.103 [INFO][5398] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.106 [INFO][5398] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.106 [INFO][5398] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.107 [INFO][5398] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54 Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.115 [INFO][5398] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.122 [INFO][5398] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.53.193/26] block=192.168.53.192/26 handle="k8s-pod-network.6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.122 [INFO][5398] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.193/26] handle="k8s-pod-network.6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.122 [INFO][5398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:58:09.358039 containerd[1756]: 2025-02-13 15:58:09.122 [INFO][5398] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.193/26] IPv6=[] ContainerID="6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" HandleID="k8s-pod-network.6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" Workload="ci--4152.2.1--a--d82c5cac77-k8s-csi--node--driver--qbpxc-eth0" Feb 13 15:58:09.358550 containerd[1756]: 2025-02-13 15:58:09.125 [INFO][5350] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" Namespace="calico-system" Pod="csi-node-driver-qbpxc" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-csi--node--driver--qbpxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--d82c5cac77-k8s-csi--node--driver--qbpxc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d8778e0-23a8-47a6-b01b-5b701fc009d0", ResourceVersion:"635", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-d82c5cac77", ContainerID:"", Pod:"csi-node-driver-qbpxc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31792cf2371", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:58:09.358550 containerd[1756]: 2025-02-13 15:58:09.125 [INFO][5350] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.193/32] ContainerID="6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" Namespace="calico-system" Pod="csi-node-driver-qbpxc" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-csi--node--driver--qbpxc-eth0" Feb 13 15:58:09.358550 containerd[1756]: 2025-02-13 15:58:09.125 [INFO][5350] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31792cf2371 ContainerID="6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" Namespace="calico-system" Pod="csi-node-driver-qbpxc" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-csi--node--driver--qbpxc-eth0" Feb 13 15:58:09.358550 containerd[1756]: 2025-02-13 15:58:09.294 [INFO][5350] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" Namespace="calico-system" Pod="csi-node-driver-qbpxc" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-csi--node--driver--qbpxc-eth0" Feb 13 15:58:09.358550 containerd[1756]: 2025-02-13 15:58:09.303 
[INFO][5350] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" Namespace="calico-system" Pod="csi-node-driver-qbpxc" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-csi--node--driver--qbpxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--d82c5cac77-k8s-csi--node--driver--qbpxc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d8778e0-23a8-47a6-b01b-5b701fc009d0", ResourceVersion:"635", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-d82c5cac77", ContainerID:"6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54", Pod:"csi-node-driver-qbpxc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31792cf2371", MAC:"4e:46:e5:83:fa:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:58:09.358550 containerd[1756]: 2025-02-13 15:58:09.346 [INFO][5350] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54" Namespace="calico-system" Pod="csi-node-driver-qbpxc" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-csi--node--driver--qbpxc-eth0" Feb 13 15:58:09.408909 containerd[1756]: time="2025-02-13T15:58:09.407683479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:58:09.408909 containerd[1756]: time="2025-02-13T15:58:09.407742839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:58:09.408909 containerd[1756]: time="2025-02-13T15:58:09.407844359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:09.408909 containerd[1756]: time="2025-02-13T15:58:09.407951319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:09.430170 containerd[1756]: time="2025-02-13T15:58:09.427440199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:58:09.432687 containerd[1756]: time="2025-02-13T15:58:09.432473569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:58:09.432687 containerd[1756]: time="2025-02-13T15:58:09.432547049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:09.433274 containerd[1756]: time="2025-02-13T15:58:09.432698129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:09.485537 systemd[1]: Started cri-containerd-77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9.scope - libcontainer container 77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9. Feb 13 15:58:09.490176 containerd[1756]: time="2025-02-13T15:58:09.489605484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:58:09.490751 containerd[1756]: time="2025-02-13T15:58:09.490253206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:58:09.490751 containerd[1756]: time="2025-02-13T15:58:09.490284566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:09.492551 containerd[1756]: time="2025-02-13T15:58:09.491928729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:09.514151 containerd[1756]: time="2025-02-13T15:58:09.496838499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:58:09.514151 containerd[1756]: time="2025-02-13T15:58:09.496941939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:58:09.514151 containerd[1756]: time="2025-02-13T15:58:09.496956379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:09.514151 containerd[1756]: time="2025-02-13T15:58:09.497048739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:09.518545 systemd[1]: Started cri-containerd-e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71.scope - libcontainer container e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71. Feb 13 15:58:09.533086 systemd[1]: Started cri-containerd-6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54.scope - libcontainer container 6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54. Feb 13 15:58:09.586650 systemd[1]: Started cri-containerd-a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234.scope - libcontainer container a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234. Feb 13 15:58:09.648551 containerd[1756]: time="2025-02-13T15:58:09.648502046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbpxc,Uid:2d8778e0-23a8-47a6-b01b-5b701fc009d0,Namespace:calico-system,Attempt:6,} returns sandbox id \"6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54\"" Feb 13 15:58:09.655048 containerd[1756]: time="2025-02-13T15:58:09.654515458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:58:09.692761 systemd-networkd[1623]: cali544d3a474e7: Link UP Feb 13 15:58:09.695929 systemd-networkd[1623]: cali544d3a474e7: Gained carrier Feb 13 15:58:09.746326 containerd[1756]: time="2025-02-13T15:58:09.745229882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-ph4x5,Uid:38b25543-f3d8-4325-8684-f120f4c5229a,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71\"" Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.372 [INFO][5428] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.398 [INFO][5428] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--xz2g6-eth0 coredns-76f75df574- kube-system 7dd69579-7ca4-4802-a6f8-37ab66ddcef1 715 0 2025-02-13 15:57:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152.2.1-a-d82c5cac77 coredns-76f75df574-xz2g6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali544d3a474e7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" Namespace="kube-system" Pod="coredns-76f75df574-xz2g6" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--xz2g6-" Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.399 [INFO][5428] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" Namespace="kube-system" Pod="coredns-76f75df574-xz2g6" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--xz2g6-eth0" Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.547 [INFO][5540] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" HandleID="k8s-pod-network.9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" Workload="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--xz2g6-eth0" Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.589 [INFO][5540] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" HandleID="k8s-pod-network.9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" Workload="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--xz2g6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039d810), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152.2.1-a-d82c5cac77", "pod":"coredns-76f75df574-xz2g6", "timestamp":"2025-02-13 15:58:09.547194361 +0000 UTC"}, Hostname:"ci-4152.2.1-a-d82c5cac77", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.589 [INFO][5540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.589 [INFO][5540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.589 [INFO][5540] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-d82c5cac77' Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.596 [INFO][5540] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.616 [INFO][5540] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.626 [INFO][5540] ipam/ipam.go 489: Trying affinity for 192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.631 [INFO][5540] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.636 [INFO][5540] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.637 [INFO][5540] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.192/26 
handle="k8s-pod-network.9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.640 [INFO][5540] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.653 [INFO][5540] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.671 [INFO][5540] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.197/26] block=192.168.53.192/26 handle="k8s-pod-network.9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.671 [INFO][5540] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.197/26] handle="k8s-pod-network.9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.671 [INFO][5540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:58:09.749899 containerd[1756]: 2025-02-13 15:58:09.671 [INFO][5540] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.197/26] IPv6=[] ContainerID="9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" HandleID="k8s-pod-network.9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" Workload="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--xz2g6-eth0" Feb 13 15:58:09.750531 containerd[1756]: 2025-02-13 15:58:09.675 [INFO][5428] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" Namespace="kube-system" Pod="coredns-76f75df574-xz2g6" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--xz2g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--xz2g6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7dd69579-7ca4-4802-a6f8-37ab66ddcef1", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-d82c5cac77", ContainerID:"", Pod:"coredns-76f75df574-xz2g6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali544d3a474e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:58:09.750531 containerd[1756]: 2025-02-13 15:58:09.680 [INFO][5428] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.197/32] ContainerID="9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" Namespace="kube-system" Pod="coredns-76f75df574-xz2g6" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--xz2g6-eth0" Feb 13 15:58:09.750531 containerd[1756]: 2025-02-13 15:58:09.681 [INFO][5428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali544d3a474e7 ContainerID="9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" Namespace="kube-system" Pod="coredns-76f75df574-xz2g6" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--xz2g6-eth0" Feb 13 15:58:09.750531 containerd[1756]: 2025-02-13 15:58:09.699 [INFO][5428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" Namespace="kube-system" Pod="coredns-76f75df574-xz2g6" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--xz2g6-eth0" Feb 13 15:58:09.750531 containerd[1756]: 2025-02-13 15:58:09.702 [INFO][5428] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" Namespace="kube-system" Pod="coredns-76f75df574-xz2g6" 
WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--xz2g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--xz2g6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7dd69579-7ca4-4802-a6f8-37ab66ddcef1", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-d82c5cac77", ContainerID:"9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a", Pod:"coredns-76f75df574-xz2g6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali544d3a474e7", MAC:"b6:81:74:ad:89:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:58:09.750531 containerd[1756]: 2025-02-13 15:58:09.726 
[INFO][5428] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a" Namespace="kube-system" Pod="coredns-76f75df574-xz2g6" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--xz2g6-eth0" Feb 13 15:58:09.767349 containerd[1756]: time="2025-02-13T15:58:09.763969000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd8d987-7vztn,Uid:17906d2c-fcbc-4df1-8ac2-176024c123e0,Namespace:calico-system,Attempt:6,} returns sandbox id \"77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9\"" Feb 13 15:58:09.774533 systemd-networkd[1623]: cali030e4e28458: Link UP Feb 13 15:58:09.776592 systemd-networkd[1623]: cali030e4e28458: Gained carrier Feb 13 15:58:09.791954 containerd[1756]: time="2025-02-13T15:58:09.791538855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5885744d75-9dqxt,Uid:4713ae5a-4a4f-4494-8aa7-cdc51f64b486,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234\"" Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.394 [INFO][5438] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.447 [INFO][5438] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--j7w87-eth0 coredns-76f75df574- kube-system a37d0509-8180-495d-aac5-1e394b4d33c7 721 0 2025-02-13 15:57:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152.2.1-a-d82c5cac77 coredns-76f75df574-j7w87 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali030e4e28458 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] 
[]}} ContainerID="065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" Namespace="kube-system" Pod="coredns-76f75df574-j7w87" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--j7w87-" Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.447 [INFO][5438] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" Namespace="kube-system" Pod="coredns-76f75df574-j7w87" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--j7w87-eth0" Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.577 [INFO][5580] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" HandleID="k8s-pod-network.065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" Workload="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--j7w87-eth0" Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.620 [INFO][5580] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" HandleID="k8s-pod-network.065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" Workload="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--j7w87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000245b30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152.2.1-a-d82c5cac77", "pod":"coredns-76f75df574-j7w87", "timestamp":"2025-02-13 15:58:09.577377462 +0000 UTC"}, Hostname:"ci-4152.2.1-a-d82c5cac77", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.621 [INFO][5580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.671 [INFO][5580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.672 [INFO][5580] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-d82c5cac77' Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.677 [INFO][5580] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.695 [INFO][5580] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.709 [INFO][5580] ipam/ipam.go 489: Trying affinity for 192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.716 [INFO][5580] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.725 [INFO][5580] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.725 [INFO][5580] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.731 [INFO][5580] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26 Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.739 [INFO][5580] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" 
host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.753 [INFO][5580] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.198/26] block=192.168.53.192/26 handle="k8s-pod-network.065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.753 [INFO][5580] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.198/26] handle="k8s-pod-network.065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" host="ci-4152.2.1-a-d82c5cac77" Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.753 [INFO][5580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:58:09.805886 containerd[1756]: 2025-02-13 15:58:09.753 [INFO][5580] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.198/26] IPv6=[] ContainerID="065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" HandleID="k8s-pod-network.065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" Workload="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--j7w87-eth0" Feb 13 15:58:09.806620 containerd[1756]: 2025-02-13 15:58:09.762 [INFO][5438] cni-plugin/k8s.go 386: Populated endpoint ContainerID="065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" Namespace="kube-system" Pod="coredns-76f75df574-j7w87" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--j7w87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--j7w87-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a37d0509-8180-495d-aac5-1e394b4d33c7", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-d82c5cac77", ContainerID:"", Pod:"coredns-76f75df574-j7w87", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali030e4e28458", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:58:09.806620 containerd[1756]: 2025-02-13 15:58:09.764 [INFO][5438] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.198/32] ContainerID="065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" Namespace="kube-system" Pod="coredns-76f75df574-j7w87" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--j7w87-eth0" Feb 13 15:58:09.806620 containerd[1756]: 2025-02-13 15:58:09.766 [INFO][5438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali030e4e28458 ContainerID="065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" Namespace="kube-system" Pod="coredns-76f75df574-j7w87" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--j7w87-eth0" Feb 
13 15:58:09.806620 containerd[1756]: 2025-02-13 15:58:09.771 [INFO][5438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" Namespace="kube-system" Pod="coredns-76f75df574-j7w87" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--j7w87-eth0" Feb 13 15:58:09.806620 containerd[1756]: 2025-02-13 15:58:09.771 [INFO][5438] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" Namespace="kube-system" Pod="coredns-76f75df574-j7w87" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--j7w87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--j7w87-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a37d0509-8180-495d-aac5-1e394b4d33c7", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-d82c5cac77", ContainerID:"065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26", Pod:"coredns-76f75df574-j7w87", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali030e4e28458", MAC:"02:e0:a6:03:67:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:58:09.806620 containerd[1756]: 2025-02-13 15:58:09.798 [INFO][5438] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26" Namespace="kube-system" Pod="coredns-76f75df574-j7w87" WorkloadEndpoint="ci--4152.2.1--a--d82c5cac77-k8s-coredns--76f75df574--j7w87-eth0" Feb 13 15:58:09.837275 containerd[1756]: time="2025-02-13T15:58:09.836764267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:58:09.837275 containerd[1756]: time="2025-02-13T15:58:09.836829947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:58:09.837275 containerd[1756]: time="2025-02-13T15:58:09.836845627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:09.837275 containerd[1756]: time="2025-02-13T15:58:09.836958427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:09.859096 containerd[1756]: time="2025-02-13T15:58:09.858477791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:58:09.859096 containerd[1756]: time="2025-02-13T15:58:09.858550951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:58:09.859096 containerd[1756]: time="2025-02-13T15:58:09.858563231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:09.859096 containerd[1756]: time="2025-02-13T15:58:09.858640671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:09.861537 systemd[1]: Started cri-containerd-9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a.scope - libcontainer container 9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a. Feb 13 15:58:09.884513 systemd[1]: Started cri-containerd-065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26.scope - libcontainer container 065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26. 
Feb 13 15:58:09.917148 containerd[1756]: time="2025-02-13T15:58:09.917065149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xz2g6,Uid:7dd69579-7ca4-4802-a6f8-37ab66ddcef1,Namespace:kube-system,Attempt:6,} returns sandbox id \"9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a\"" Feb 13 15:58:09.928067 containerd[1756]: time="2025-02-13T15:58:09.927977251Z" level=info msg="CreateContainer within sandbox \"9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:58:09.940989 containerd[1756]: time="2025-02-13T15:58:09.940938238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j7w87,Uid:a37d0509-8180-495d-aac5-1e394b4d33c7,Namespace:kube-system,Attempt:6,} returns sandbox id \"065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26\"" Feb 13 15:58:09.947892 containerd[1756]: time="2025-02-13T15:58:09.947832652Z" level=info msg="CreateContainer within sandbox \"065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:58:10.022974 containerd[1756]: time="2025-02-13T15:58:10.022919363Z" level=info msg="CreateContainer within sandbox \"9ad0f978118a1c9512da33c2da77c8c9ff6735640352f47888550410484ec72a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"275e50dd4b55ed5f34678e1ef9fd86019305187d63b932b891f1d51dd3603cb4\"" Feb 13 15:58:10.023680 containerd[1756]: time="2025-02-13T15:58:10.023623525Z" level=info msg="StartContainer for \"275e50dd4b55ed5f34678e1ef9fd86019305187d63b932b891f1d51dd3603cb4\"" Feb 13 15:58:10.036492 containerd[1756]: time="2025-02-13T15:58:10.036436911Z" level=info msg="CreateContainer within sandbox \"065153bc6b95c5b43bd49b6b715f56009ea8b33344e60af97ba2c152573fda26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"065ab058f809d73310035e6d5da9b652aebde805ce000d1eb14c9aa0d515fe51\"" Feb 13 15:58:10.039610 containerd[1756]: time="2025-02-13T15:58:10.039560437Z" level=info msg="StartContainer for \"065ab058f809d73310035e6d5da9b652aebde805ce000d1eb14c9aa0d515fe51\"" Feb 13 15:58:10.095761 systemd[1]: Started cri-containerd-275e50dd4b55ed5f34678e1ef9fd86019305187d63b932b891f1d51dd3603cb4.scope - libcontainer container 275e50dd4b55ed5f34678e1ef9fd86019305187d63b932b891f1d51dd3603cb4. Feb 13 15:58:10.114510 systemd[1]: Started cri-containerd-065ab058f809d73310035e6d5da9b652aebde805ce000d1eb14c9aa0d515fe51.scope - libcontainer container 065ab058f809d73310035e6d5da9b652aebde805ce000d1eb14c9aa0d515fe51. Feb 13 15:58:10.174837 containerd[1756]: time="2025-02-13T15:58:10.174598150Z" level=info msg="StartContainer for \"275e50dd4b55ed5f34678e1ef9fd86019305187d63b932b891f1d51dd3603cb4\" returns successfully" Feb 13 15:58:10.186522 containerd[1756]: time="2025-02-13T15:58:10.186456774Z" level=info msg="StartContainer for \"065ab058f809d73310035e6d5da9b652aebde805ce000d1eb14c9aa0d515fe51\" returns successfully" Feb 13 15:58:10.427334 kernel: bpftool[5992]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:58:10.463479 systemd-networkd[1623]: calif0bb0148eb1: Gained IPv6LL Feb 13 15:58:10.600396 systemd-networkd[1623]: vxlan.calico: Link UP Feb 13 15:58:10.600407 systemd-networkd[1623]: vxlan.calico: Gained carrier Feb 13 15:58:10.656834 kubelet[3351]: I0213 15:58:10.656547 3351 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-j7w87" podStartSLOduration=28.656500245 podStartE2EDuration="28.656500245s" podCreationTimestamp="2025-02-13 15:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:58:10.653411759 +0000 UTC m=+42.638702341" watchObservedRunningTime="2025-02-13 15:58:10.656500245 +0000 UTC m=+42.641790827" 
Feb 13 15:58:10.657722 systemd-networkd[1623]: calif9122931ad3: Gained IPv6LL Feb 13 15:58:10.691529 kubelet[3351]: I0213 15:58:10.691417 3351 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xz2g6" podStartSLOduration=28.691367556 podStartE2EDuration="28.691367556s" podCreationTimestamp="2025-02-13 15:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:58:10.687836069 +0000 UTC m=+42.673126651" watchObservedRunningTime="2025-02-13 15:58:10.691367556 +0000 UTC m=+42.676658138" Feb 13 15:58:10.719520 systemd-networkd[1623]: cali31792cf2371: Gained IPv6LL Feb 13 15:58:10.720277 systemd-networkd[1623]: cali4e57b84417e: Gained IPv6LL Feb 13 15:58:11.039490 systemd-networkd[1623]: cali030e4e28458: Gained IPv6LL Feb 13 15:58:11.351538 containerd[1756]: time="2025-02-13T15:58:11.351410251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:11.354825 containerd[1756]: time="2025-02-13T15:58:11.354600018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 15:58:11.359885 containerd[1756]: time="2025-02-13T15:58:11.359812788Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:11.365392 containerd[1756]: time="2025-02-13T15:58:11.365280359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:11.373744 containerd[1756]: time="2025-02-13T15:58:11.373613776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id 
\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.718784557s" Feb 13 15:58:11.373744 containerd[1756]: time="2025-02-13T15:58:11.373655936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 15:58:11.375842 containerd[1756]: time="2025-02-13T15:58:11.374673058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:58:11.376263 containerd[1756]: time="2025-02-13T15:58:11.375679780Z" level=info msg="CreateContainer within sandbox \"6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:58:11.427706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount368723493.mount: Deactivated successfully. Feb 13 15:58:11.441907 containerd[1756]: time="2025-02-13T15:58:11.441859194Z" level=info msg="CreateContainer within sandbox \"6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"707fdc424c4c3ef90cbc2fffd9ada3c9c7576dfaef9b994bf5e0bec441993740\"" Feb 13 15:58:11.442507 containerd[1756]: time="2025-02-13T15:58:11.442428756Z" level=info msg="StartContainer for \"707fdc424c4c3ef90cbc2fffd9ada3c9c7576dfaef9b994bf5e0bec441993740\"" Feb 13 15:58:11.485528 systemd[1]: Started cri-containerd-707fdc424c4c3ef90cbc2fffd9ada3c9c7576dfaef9b994bf5e0bec441993740.scope - libcontainer container 707fdc424c4c3ef90cbc2fffd9ada3c9c7576dfaef9b994bf5e0bec441993740. 
Feb 13 15:58:11.519707 containerd[1756]: time="2025-02-13T15:58:11.519182271Z" level=info msg="StartContainer for \"707fdc424c4c3ef90cbc2fffd9ada3c9c7576dfaef9b994bf5e0bec441993740\" returns successfully" Feb 13 15:58:11.615475 systemd-networkd[1623]: cali544d3a474e7: Gained IPv6LL Feb 13 15:58:12.383458 systemd-networkd[1623]: vxlan.calico: Gained IPv6LL Feb 13 15:58:15.908368 containerd[1756]: time="2025-02-13T15:58:15.908118449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:15.911240 containerd[1756]: time="2025-02-13T15:58:15.911139582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 15:58:15.914372 containerd[1756]: time="2025-02-13T15:58:15.914312956Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:15.920977 containerd[1756]: time="2025-02-13T15:58:15.920529463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:15.921381 containerd[1756]: time="2025-02-13T15:58:15.921346226Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 4.545327245s" Feb 13 15:58:15.921479 containerd[1756]: time="2025-02-13T15:58:15.921464987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference 
\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:58:15.923699 containerd[1756]: time="2025-02-13T15:58:15.923064554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:58:15.928462 containerd[1756]: time="2025-02-13T15:58:15.928348696Z" level=info msg="CreateContainer within sandbox \"e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:58:15.977928 containerd[1756]: time="2025-02-13T15:58:15.977875830Z" level=info msg="CreateContainer within sandbox \"e86128de9be564b0ff3a70a7759ec615623cd878b8855456c0f455b67ec50f71\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"683076c35075cd9478a2b616c339d43110eff87cc8aea0b53ca1e505a261f05e\"" Feb 13 15:58:15.979212 containerd[1756]: time="2025-02-13T15:58:15.978476273Z" level=info msg="StartContainer for \"683076c35075cd9478a2b616c339d43110eff87cc8aea0b53ca1e505a261f05e\"" Feb 13 15:58:16.008493 systemd[1]: Started cri-containerd-683076c35075cd9478a2b616c339d43110eff87cc8aea0b53ca1e505a261f05e.scope - libcontainer container 683076c35075cd9478a2b616c339d43110eff87cc8aea0b53ca1e505a261f05e. 
Feb 13 15:58:16.043644 containerd[1756]: time="2025-02-13T15:58:16.043557714Z" level=info msg="StartContainer for \"683076c35075cd9478a2b616c339d43110eff87cc8aea0b53ca1e505a261f05e\" returns successfully" Feb 13 15:58:16.905945 kubelet[3351]: I0213 15:58:16.905886 3351 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5885744d75-ph4x5" podStartSLOduration=21.749496248 podStartE2EDuration="27.905838315s" podCreationTimestamp="2025-02-13 15:57:49 +0000 UTC" firstStartedPulling="2025-02-13 15:58:09.765844683 +0000 UTC m=+41.751135225" lastFinishedPulling="2025-02-13 15:58:15.92218675 +0000 UTC m=+47.907477292" observedRunningTime="2025-02-13 15:58:16.902611421 +0000 UTC m=+48.887902003" watchObservedRunningTime="2025-02-13 15:58:16.905838315 +0000 UTC m=+48.891128857" Feb 13 15:58:17.889846 kubelet[3351]: I0213 15:58:17.889766 3351 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:58:18.089546 containerd[1756]: time="2025-02-13T15:58:18.089484245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:18.092782 containerd[1756]: time="2025-02-13T15:58:18.092706772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 15:58:18.096997 containerd[1756]: time="2025-02-13T15:58:18.096927100Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:18.102228 containerd[1756]: time="2025-02-13T15:58:18.102159311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:18.103409 containerd[1756]: 
time="2025-02-13T15:58:18.102854792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.179751198s" Feb 13 15:58:18.103409 containerd[1756]: time="2025-02-13T15:58:18.102892152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 15:58:18.104269 containerd[1756]: time="2025-02-13T15:58:18.104238395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:58:18.128449 containerd[1756]: time="2025-02-13T15:58:18.127375721Z" level=info msg="CreateContainer within sandbox \"77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:58:18.168679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3784682122.mount: Deactivated successfully. Feb 13 15:58:18.178849 containerd[1756]: time="2025-02-13T15:58:18.178794903Z" level=info msg="CreateContainer within sandbox \"77a23c9f8607514eba8e9725c8f1b3b590fc86029cba185c19288fc366a329a9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0017d71c73c3398f14edb4d92eb627d8b26442eefc9e65337efc8589cc3c4476\"" Feb 13 15:58:18.180159 containerd[1756]: time="2025-02-13T15:58:18.179350104Z" level=info msg="StartContainer for \"0017d71c73c3398f14edb4d92eb627d8b26442eefc9e65337efc8589cc3c4476\"" Feb 13 15:58:18.209658 systemd[1]: Started cri-containerd-0017d71c73c3398f14edb4d92eb627d8b26442eefc9e65337efc8589cc3c4476.scope - libcontainer container 0017d71c73c3398f14edb4d92eb627d8b26442eefc9e65337efc8589cc3c4476. 
Feb 13 15:58:18.249204 containerd[1756]: time="2025-02-13T15:58:18.249149643Z" level=info msg="StartContainer for \"0017d71c73c3398f14edb4d92eb627d8b26442eefc9e65337efc8589cc3c4476\" returns successfully" Feb 13 15:58:18.516191 containerd[1756]: time="2025-02-13T15:58:18.516081655Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:18.520331 containerd[1756]: time="2025-02-13T15:58:18.519954383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 15:58:18.522142 containerd[1756]: time="2025-02-13T15:58:18.522105227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 417.692352ms" Feb 13 15:58:18.522142 containerd[1756]: time="2025-02-13T15:58:18.522145508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:58:18.523458 containerd[1756]: time="2025-02-13T15:58:18.523258230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:58:18.525271 containerd[1756]: time="2025-02-13T15:58:18.525101713Z" level=info msg="CreateContainer within sandbox \"a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:58:18.573120 containerd[1756]: time="2025-02-13T15:58:18.573026569Z" level=info msg="CreateContainer within sandbox \"a9547eb21c923e08f1642cbe1b915481e885ee5422d86d444ff4b9babca0c234\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fe55bdc6028fd24e8d1db212b0f8a893bcdbd660913cce31af4d3bca6fc7b0bd\"" Feb 13 15:58:18.573692 containerd[1756]: time="2025-02-13T15:58:18.573664930Z" level=info msg="StartContainer for \"fe55bdc6028fd24e8d1db212b0f8a893bcdbd660913cce31af4d3bca6fc7b0bd\"" Feb 13 15:58:18.601515 systemd[1]: Started cri-containerd-fe55bdc6028fd24e8d1db212b0f8a893bcdbd660913cce31af4d3bca6fc7b0bd.scope - libcontainer container fe55bdc6028fd24e8d1db212b0f8a893bcdbd660913cce31af4d3bca6fc7b0bd. Feb 13 15:58:18.637114 containerd[1756]: time="2025-02-13T15:58:18.637032937Z" level=info msg="StartContainer for \"fe55bdc6028fd24e8d1db212b0f8a893bcdbd660913cce31af4d3bca6fc7b0bd\" returns successfully" Feb 13 15:58:18.918552 kubelet[3351]: I0213 15:58:18.917866 3351 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5885744d75-9dqxt" podStartSLOduration=21.196466863 podStartE2EDuration="29.917808776s" podCreationTimestamp="2025-02-13 15:57:49 +0000 UTC" firstStartedPulling="2025-02-13 15:58:09.801259475 +0000 UTC m=+41.786550057" lastFinishedPulling="2025-02-13 15:58:18.522601388 +0000 UTC m=+50.507891970" observedRunningTime="2025-02-13 15:58:18.916133413 +0000 UTC m=+50.901423995" watchObservedRunningTime="2025-02-13 15:58:18.917808776 +0000 UTC m=+50.903099318" Feb 13 15:58:18.979172 kubelet[3351]: I0213 15:58:18.977539 3351 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-684dd8d987-7vztn" podStartSLOduration=19.643913193 podStartE2EDuration="27.977497895s" podCreationTimestamp="2025-02-13 15:57:51 +0000 UTC" firstStartedPulling="2025-02-13 15:58:09.769637611 +0000 UTC m=+41.754928193" lastFinishedPulling="2025-02-13 15:58:18.103222353 +0000 UTC m=+50.088512895" observedRunningTime="2025-02-13 15:58:18.947057594 +0000 UTC m=+50.932348176" watchObservedRunningTime="2025-02-13 15:58:18.977497895 +0000 UTC 
m=+50.962788437" Feb 13 15:58:19.907549 kubelet[3351]: I0213 15:58:19.906987 3351 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:58:20.066028 containerd[1756]: time="2025-02-13T15:58:20.065569783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:20.069528 containerd[1756]: time="2025-02-13T15:58:20.069458911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 15:58:20.073761 containerd[1756]: time="2025-02-13T15:58:20.073709680Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:20.080677 containerd[1756]: time="2025-02-13T15:58:20.080627653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:20.082158 containerd[1756]: time="2025-02-13T15:58:20.082121416Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.558832946s" Feb 13 15:58:20.082227 containerd[1756]: time="2025-02-13T15:58:20.082161857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 15:58:20.084210 containerd[1756]: time="2025-02-13T15:58:20.084173541Z" level=info 
msg="CreateContainer within sandbox \"6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:58:20.140417 containerd[1756]: time="2025-02-13T15:58:20.140364412Z" level=info msg="CreateContainer within sandbox \"6955356a6700072ac610b7685dc43c6186b3007b18cbbd42b6bc29ac2e0d4c54\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ec0bf8d7d20e0a945a6b9719b58a7836d7854a8158a1687341e6ef1415f1810e\"" Feb 13 15:58:20.141152 containerd[1756]: time="2025-02-13T15:58:20.140947934Z" level=info msg="StartContainer for \"ec0bf8d7d20e0a945a6b9719b58a7836d7854a8158a1687341e6ef1415f1810e\"" Feb 13 15:58:20.179500 systemd[1]: Started cri-containerd-ec0bf8d7d20e0a945a6b9719b58a7836d7854a8158a1687341e6ef1415f1810e.scope - libcontainer container ec0bf8d7d20e0a945a6b9719b58a7836d7854a8158a1687341e6ef1415f1810e. Feb 13 15:58:20.215947 containerd[1756]: time="2025-02-13T15:58:20.215898523Z" level=info msg="StartContainer for \"ec0bf8d7d20e0a945a6b9719b58a7836d7854a8158a1687341e6ef1415f1810e\" returns successfully" Feb 13 15:58:20.254393 kubelet[3351]: I0213 15:58:20.254258 3351 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:58:20.254393 kubelet[3351]: I0213 15:58:20.254326 3351 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:58:20.928069 kubelet[3351]: I0213 15:58:20.928022 3351 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-qbpxc" podStartSLOduration=19.497593018 podStartE2EDuration="29.927981822s" podCreationTimestamp="2025-02-13 15:57:51 +0000 UTC" firstStartedPulling="2025-02-13 15:58:09.652001613 +0000 UTC m=+41.637292195" lastFinishedPulling="2025-02-13 
15:58:20.082390417 +0000 UTC m=+52.067680999" observedRunningTime="2025-02-13 15:58:20.927599461 +0000 UTC m=+52.912890043" watchObservedRunningTime="2025-02-13 15:58:20.927981822 +0000 UTC m=+52.913272404" Feb 13 15:58:28.150543 containerd[1756]: time="2025-02-13T15:58:28.150498368Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\"" Feb 13 15:58:28.150911 containerd[1756]: time="2025-02-13T15:58:28.150615368Z" level=info msg="TearDown network for sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" successfully" Feb 13 15:58:28.150911 containerd[1756]: time="2025-02-13T15:58:28.150627128Z" level=info msg="StopPodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" returns successfully" Feb 13 15:58:28.151418 containerd[1756]: time="2025-02-13T15:58:28.151372370Z" level=info msg="RemovePodSandbox for \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\"" Feb 13 15:58:28.151518 containerd[1756]: time="2025-02-13T15:58:28.151418570Z" level=info msg="Forcibly stopping sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\"" Feb 13 15:58:28.151518 containerd[1756]: time="2025-02-13T15:58:28.151505130Z" level=info msg="TearDown network for sandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" successfully" Feb 13 15:58:28.163041 containerd[1756]: time="2025-02-13T15:58:28.162981833Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.163185 containerd[1756]: time="2025-02-13T15:58:28.163069073Z" level=info msg="RemovePodSandbox \"9a4486960c427d0ee2091a01d9bb87772bd2f0864a521a4ec29d827b30c37f0c\" returns successfully" Feb 13 15:58:28.164158 containerd[1756]: time="2025-02-13T15:58:28.163624994Z" level=info msg="StopPodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\"" Feb 13 15:58:28.164158 containerd[1756]: time="2025-02-13T15:58:28.163732835Z" level=info msg="TearDown network for sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" successfully" Feb 13 15:58:28.164158 containerd[1756]: time="2025-02-13T15:58:28.163743555Z" level=info msg="StopPodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" returns successfully" Feb 13 15:58:28.164579 containerd[1756]: time="2025-02-13T15:58:28.164487396Z" level=info msg="RemovePodSandbox for \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\"" Feb 13 15:58:28.164579 containerd[1756]: time="2025-02-13T15:58:28.164512916Z" level=info msg="Forcibly stopping sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\"" Feb 13 15:58:28.164753 containerd[1756]: time="2025-02-13T15:58:28.164581916Z" level=info msg="TearDown network for sandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" successfully" Feb 13 15:58:28.176208 containerd[1756]: time="2025-02-13T15:58:28.176145019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.176419 containerd[1756]: time="2025-02-13T15:58:28.176228140Z" level=info msg="RemovePodSandbox \"a95bc8551c76d48104942657754ad771042fdf725f890033cf5910649d7f81f2\" returns successfully" Feb 13 15:58:28.177190 containerd[1756]: time="2025-02-13T15:58:28.176940621Z" level=info msg="StopPodSandbox for \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\"" Feb 13 15:58:28.177190 containerd[1756]: time="2025-02-13T15:58:28.177134661Z" level=info msg="TearDown network for sandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\" successfully" Feb 13 15:58:28.177190 containerd[1756]: time="2025-02-13T15:58:28.177147462Z" level=info msg="StopPodSandbox for \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\" returns successfully" Feb 13 15:58:28.177589 containerd[1756]: time="2025-02-13T15:58:28.177552542Z" level=info msg="RemovePodSandbox for \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\"" Feb 13 15:58:28.177639 containerd[1756]: time="2025-02-13T15:58:28.177596302Z" level=info msg="Forcibly stopping sandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\"" Feb 13 15:58:28.177698 containerd[1756]: time="2025-02-13T15:58:28.177677383Z" level=info msg="TearDown network for sandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\" successfully" Feb 13 15:58:28.186903 containerd[1756]: time="2025-02-13T15:58:28.186848161Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.187026 containerd[1756]: time="2025-02-13T15:58:28.186930961Z" level=info msg="RemovePodSandbox \"0c24658a8fd339a4d7c71c9cff0fd88b30a50144a26acde2878d0fcfad3606c2\" returns successfully" Feb 13 15:58:28.187626 containerd[1756]: time="2025-02-13T15:58:28.187445922Z" level=info msg="StopPodSandbox for \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\"" Feb 13 15:58:28.187626 containerd[1756]: time="2025-02-13T15:58:28.187557322Z" level=info msg="TearDown network for sandbox \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\" successfully" Feb 13 15:58:28.187626 containerd[1756]: time="2025-02-13T15:58:28.187567922Z" level=info msg="StopPodSandbox for \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\" returns successfully" Feb 13 15:58:28.188409 containerd[1756]: time="2025-02-13T15:58:28.188011523Z" level=info msg="RemovePodSandbox for \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\"" Feb 13 15:58:28.188409 containerd[1756]: time="2025-02-13T15:58:28.188037763Z" level=info msg="Forcibly stopping sandbox \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\"" Feb 13 15:58:28.188409 containerd[1756]: time="2025-02-13T15:58:28.188103123Z" level=info msg="TearDown network for sandbox \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\" successfully" Feb 13 15:58:28.194947 containerd[1756]: time="2025-02-13T15:58:28.194895457Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.195110 containerd[1756]: time="2025-02-13T15:58:28.194966777Z" level=info msg="RemovePodSandbox \"57f547b35a8f518fffda64c60efa655394a5684c4cf130bfb8a44cf7fc5ab60a\" returns successfully" Feb 13 15:58:28.195807 containerd[1756]: time="2025-02-13T15:58:28.195471698Z" level=info msg="StopPodSandbox for \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\"" Feb 13 15:58:28.195807 containerd[1756]: time="2025-02-13T15:58:28.195580178Z" level=info msg="TearDown network for sandbox \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\" successfully" Feb 13 15:58:28.195807 containerd[1756]: time="2025-02-13T15:58:28.195590538Z" level=info msg="StopPodSandbox for \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\" returns successfully" Feb 13 15:58:28.196318 containerd[1756]: time="2025-02-13T15:58:28.196187660Z" level=info msg="RemovePodSandbox for \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\"" Feb 13 15:58:28.196318 containerd[1756]: time="2025-02-13T15:58:28.196228420Z" level=info msg="Forcibly stopping sandbox \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\"" Feb 13 15:58:28.196477 containerd[1756]: time="2025-02-13T15:58:28.196420780Z" level=info msg="TearDown network for sandbox \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\" successfully" Feb 13 15:58:28.205447 containerd[1756]: time="2025-02-13T15:58:28.205384358Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.205447 containerd[1756]: time="2025-02-13T15:58:28.205449598Z" level=info msg="RemovePodSandbox \"f819292ebc65eded895ba5e27073cf469dcf9dde9b0cb1bf8ef2fe91aafeaf72\" returns successfully" Feb 13 15:58:28.206056 containerd[1756]: time="2025-02-13T15:58:28.205889799Z" level=info msg="StopPodSandbox for \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\"" Feb 13 15:58:28.206056 containerd[1756]: time="2025-02-13T15:58:28.205990639Z" level=info msg="TearDown network for sandbox \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\" successfully" Feb 13 15:58:28.206056 containerd[1756]: time="2025-02-13T15:58:28.206001039Z" level=info msg="StopPodSandbox for \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\" returns successfully" Feb 13 15:58:28.206765 containerd[1756]: time="2025-02-13T15:58:28.206467240Z" level=info msg="RemovePodSandbox for \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\"" Feb 13 15:58:28.206765 containerd[1756]: time="2025-02-13T15:58:28.206504600Z" level=info msg="Forcibly stopping sandbox \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\"" Feb 13 15:58:28.206765 containerd[1756]: time="2025-02-13T15:58:28.206574200Z" level=info msg="TearDown network for sandbox \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\" successfully" Feb 13 15:58:28.217669 containerd[1756]: time="2025-02-13T15:58:28.217614063Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.217812 containerd[1756]: time="2025-02-13T15:58:28.217708183Z" level=info msg="RemovePodSandbox \"2deaf90485fe2811a502971d3063e9fc99f5d20850bef46e19985ccdc53c80bb\" returns successfully" Feb 13 15:58:28.218420 containerd[1756]: time="2025-02-13T15:58:28.218228064Z" level=info msg="StopPodSandbox for \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\"" Feb 13 15:58:28.218420 containerd[1756]: time="2025-02-13T15:58:28.218354984Z" level=info msg="TearDown network for sandbox \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\" successfully" Feb 13 15:58:28.218420 containerd[1756]: time="2025-02-13T15:58:28.218366144Z" level=info msg="StopPodSandbox for \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\" returns successfully" Feb 13 15:58:28.219048 containerd[1756]: time="2025-02-13T15:58:28.218741025Z" level=info msg="RemovePodSandbox for \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\"" Feb 13 15:58:28.219048 containerd[1756]: time="2025-02-13T15:58:28.218765705Z" level=info msg="Forcibly stopping sandbox \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\"" Feb 13 15:58:28.219048 containerd[1756]: time="2025-02-13T15:58:28.218831105Z" level=info msg="TearDown network for sandbox \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\" successfully" Feb 13 15:58:28.230118 containerd[1756]: time="2025-02-13T15:58:28.230034727Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.230382 containerd[1756]: time="2025-02-13T15:58:28.230127008Z" level=info msg="RemovePodSandbox \"4bb8ab361adb50a6f2e7889880d8ddaf7bb3de0fe1fecec59547f50312196ccc\" returns successfully" Feb 13 15:58:28.230614 containerd[1756]: time="2025-02-13T15:58:28.230578568Z" level=info msg="StopPodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\"" Feb 13 15:58:28.230765 containerd[1756]: time="2025-02-13T15:58:28.230724009Z" level=info msg="TearDown network for sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" successfully" Feb 13 15:58:28.230765 containerd[1756]: time="2025-02-13T15:58:28.230760329Z" level=info msg="StopPodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" returns successfully" Feb 13 15:58:28.231539 containerd[1756]: time="2025-02-13T15:58:28.231115610Z" level=info msg="RemovePodSandbox for \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\"" Feb 13 15:58:28.231539 containerd[1756]: time="2025-02-13T15:58:28.231142770Z" level=info msg="Forcibly stopping sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\"" Feb 13 15:58:28.231539 containerd[1756]: time="2025-02-13T15:58:28.231204370Z" level=info msg="TearDown network for sandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" successfully" Feb 13 15:58:28.244898 containerd[1756]: time="2025-02-13T15:58:28.244846917Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.245420 containerd[1756]: time="2025-02-13T15:58:28.245390758Z" level=info msg="RemovePodSandbox \"755ebc0966dafc63d3c308f5062f0696bfe7ef14e9d738ecb0fbcd0d576f9f27\" returns successfully" Feb 13 15:58:28.246079 containerd[1756]: time="2025-02-13T15:58:28.246040719Z" level=info msg="StopPodSandbox for \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\"" Feb 13 15:58:28.246173 containerd[1756]: time="2025-02-13T15:58:28.246153440Z" level=info msg="TearDown network for sandbox \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\" successfully" Feb 13 15:58:28.246173 containerd[1756]: time="2025-02-13T15:58:28.246168200Z" level=info msg="StopPodSandbox for \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\" returns successfully" Feb 13 15:58:28.246866 containerd[1756]: time="2025-02-13T15:58:28.246829601Z" level=info msg="RemovePodSandbox for \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\"" Feb 13 15:58:28.246866 containerd[1756]: time="2025-02-13T15:58:28.246865081Z" level=info msg="Forcibly stopping sandbox \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\"" Feb 13 15:58:28.246971 containerd[1756]: time="2025-02-13T15:58:28.246939321Z" level=info msg="TearDown network for sandbox \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\" successfully" Feb 13 15:58:28.258104 containerd[1756]: time="2025-02-13T15:58:28.257932623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.258104 containerd[1756]: time="2025-02-13T15:58:28.258005303Z" level=info msg="RemovePodSandbox \"ebe135035bc0ca62ddb00b2c3718ac0728dfa1e54337208631cd6ab572b7c6d5\" returns successfully" Feb 13 15:58:28.258824 containerd[1756]: time="2025-02-13T15:58:28.258622265Z" level=info msg="StopPodSandbox for \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\"" Feb 13 15:58:28.258824 containerd[1756]: time="2025-02-13T15:58:28.258729625Z" level=info msg="TearDown network for sandbox \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\" successfully" Feb 13 15:58:28.258824 containerd[1756]: time="2025-02-13T15:58:28.258739625Z" level=info msg="StopPodSandbox for \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\" returns successfully" Feb 13 15:58:28.259123 containerd[1756]: time="2025-02-13T15:58:28.259096146Z" level=info msg="RemovePodSandbox for \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\"" Feb 13 15:58:28.259170 containerd[1756]: time="2025-02-13T15:58:28.259126026Z" level=info msg="Forcibly stopping sandbox \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\"" Feb 13 15:58:28.259195 containerd[1756]: time="2025-02-13T15:58:28.259186146Z" level=info msg="TearDown network for sandbox \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\" successfully" Feb 13 15:58:28.270441 containerd[1756]: time="2025-02-13T15:58:28.270321408Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.270441 containerd[1756]: time="2025-02-13T15:58:28.270396408Z" level=info msg="RemovePodSandbox \"a471e097b928b0083bcf6420740281eff1187b6eca4debb6772f137dc6c8a0e2\" returns successfully" Feb 13 15:58:28.270937 containerd[1756]: time="2025-02-13T15:58:28.270771609Z" level=info msg="StopPodSandbox for \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\"" Feb 13 15:58:28.270937 containerd[1756]: time="2025-02-13T15:58:28.270874969Z" level=info msg="TearDown network for sandbox \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\" successfully" Feb 13 15:58:28.270937 containerd[1756]: time="2025-02-13T15:58:28.270886569Z" level=info msg="StopPodSandbox for \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\" returns successfully" Feb 13 15:58:28.271446 containerd[1756]: time="2025-02-13T15:58:28.271388890Z" level=info msg="RemovePodSandbox for \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\"" Feb 13 15:58:28.271446 containerd[1756]: time="2025-02-13T15:58:28.271417490Z" level=info msg="Forcibly stopping sandbox \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\"" Feb 13 15:58:28.272205 containerd[1756]: time="2025-02-13T15:58:28.271643811Z" level=info msg="TearDown network for sandbox \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\" successfully" Feb 13 15:58:28.281783 containerd[1756]: time="2025-02-13T15:58:28.281720351Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.281887 containerd[1756]: time="2025-02-13T15:58:28.281804151Z" level=info msg="RemovePodSandbox \"788d64e4f3c8cc8d90613d90ecf41ec1dc73ec16cfc82c61c819897314fed5c6\" returns successfully" Feb 13 15:58:28.282509 containerd[1756]: time="2025-02-13T15:58:28.282271992Z" level=info msg="StopPodSandbox for \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\"" Feb 13 15:58:28.282509 containerd[1756]: time="2025-02-13T15:58:28.282401912Z" level=info msg="TearDown network for sandbox \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\" successfully" Feb 13 15:58:28.282509 containerd[1756]: time="2025-02-13T15:58:28.282413832Z" level=info msg="StopPodSandbox for \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\" returns successfully" Feb 13 15:58:28.283609 containerd[1756]: time="2025-02-13T15:58:28.282763033Z" level=info msg="RemovePodSandbox for \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\"" Feb 13 15:58:28.283609 containerd[1756]: time="2025-02-13T15:58:28.282796473Z" level=info msg="Forcibly stopping sandbox \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\"" Feb 13 15:58:28.283609 containerd[1756]: time="2025-02-13T15:58:28.282858233Z" level=info msg="TearDown network for sandbox \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\" successfully" Feb 13 15:58:28.296792 containerd[1756]: time="2025-02-13T15:58:28.296733221Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.296959 containerd[1756]: time="2025-02-13T15:58:28.296808661Z" level=info msg="RemovePodSandbox \"d5e5b6cc6d69857047bbbe4f347bc28a45c65da546c1821142db0db78d0b7666\" returns successfully" Feb 13 15:58:28.297558 containerd[1756]: time="2025-02-13T15:58:28.297336662Z" level=info msg="StopPodSandbox for \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\"" Feb 13 15:58:28.297558 containerd[1756]: time="2025-02-13T15:58:28.297430902Z" level=info msg="TearDown network for sandbox \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\" successfully" Feb 13 15:58:28.297558 containerd[1756]: time="2025-02-13T15:58:28.297441662Z" level=info msg="StopPodSandbox for \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\" returns successfully" Feb 13 15:58:28.298217 containerd[1756]: time="2025-02-13T15:58:28.297836503Z" level=info msg="RemovePodSandbox for \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\"" Feb 13 15:58:28.298217 containerd[1756]: time="2025-02-13T15:58:28.297877743Z" level=info msg="Forcibly stopping sandbox \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\"" Feb 13 15:58:28.298217 containerd[1756]: time="2025-02-13T15:58:28.297946903Z" level=info msg="TearDown network for sandbox \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\" successfully" Feb 13 15:58:28.311023 containerd[1756]: time="2025-02-13T15:58:28.310954009Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.311450 containerd[1756]: time="2025-02-13T15:58:28.311428050Z" level=info msg="RemovePodSandbox \"86aed3c05c0adf8cf7647ecfaa2f9f572def1f31d867941db17b394741fd5388\" returns successfully" Feb 13 15:58:28.311879 containerd[1756]: time="2025-02-13T15:58:28.311857371Z" level=info msg="StopPodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\"" Feb 13 15:58:28.312323 containerd[1756]: time="2025-02-13T15:58:28.312160252Z" level=info msg="TearDown network for sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" successfully" Feb 13 15:58:28.312323 containerd[1756]: time="2025-02-13T15:58:28.312189252Z" level=info msg="StopPodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" returns successfully" Feb 13 15:58:28.313624 containerd[1756]: time="2025-02-13T15:58:28.312556653Z" level=info msg="RemovePodSandbox for \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\"" Feb 13 15:58:28.313624 containerd[1756]: time="2025-02-13T15:58:28.312582773Z" level=info msg="Forcibly stopping sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\"" Feb 13 15:58:28.313624 containerd[1756]: time="2025-02-13T15:58:28.312646973Z" level=info msg="TearDown network for sandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" successfully" Feb 13 15:58:28.325748 containerd[1756]: time="2025-02-13T15:58:28.325703719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.325957 containerd[1756]: time="2025-02-13T15:58:28.325770959Z" level=info msg="RemovePodSandbox \"8423f7be653e1d50cd90f23005d2aaa11b04ed6cd33a9da53c1551f9722a4192\" returns successfully" Feb 13 15:58:28.326703 containerd[1756]: time="2025-02-13T15:58:28.326546721Z" level=info msg="StopPodSandbox for \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\"" Feb 13 15:58:28.326703 containerd[1756]: time="2025-02-13T15:58:28.326652001Z" level=info msg="TearDown network for sandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\" successfully" Feb 13 15:58:28.326703 containerd[1756]: time="2025-02-13T15:58:28.326661721Z" level=info msg="StopPodSandbox for \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\" returns successfully" Feb 13 15:58:28.327002 containerd[1756]: time="2025-02-13T15:58:28.326969361Z" level=info msg="RemovePodSandbox for \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\"" Feb 13 15:58:28.327037 containerd[1756]: time="2025-02-13T15:58:28.327003081Z" level=info msg="Forcibly stopping sandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\"" Feb 13 15:58:28.327090 containerd[1756]: time="2025-02-13T15:58:28.327071682Z" level=info msg="TearDown network for sandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\" successfully" Feb 13 15:58:28.337346 containerd[1756]: time="2025-02-13T15:58:28.337131422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.337346 containerd[1756]: time="2025-02-13T15:58:28.337217462Z" level=info msg="RemovePodSandbox \"280d57e77963f19e086253c47bf0defefb946de9af1c840c8217610cf345e1bb\" returns successfully" Feb 13 15:58:28.337717 containerd[1756]: time="2025-02-13T15:58:28.337689223Z" level=info msg="StopPodSandbox for \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\"" Feb 13 15:58:28.337814 containerd[1756]: time="2025-02-13T15:58:28.337793783Z" level=info msg="TearDown network for sandbox \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\" successfully" Feb 13 15:58:28.337814 containerd[1756]: time="2025-02-13T15:58:28.337810303Z" level=info msg="StopPodSandbox for \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\" returns successfully" Feb 13 15:58:28.338161 containerd[1756]: time="2025-02-13T15:58:28.338131904Z" level=info msg="RemovePodSandbox for \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\"" Feb 13 15:58:28.338230 containerd[1756]: time="2025-02-13T15:58:28.338162704Z" level=info msg="Forcibly stopping sandbox \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\"" Feb 13 15:58:28.338266 containerd[1756]: time="2025-02-13T15:58:28.338229224Z" level=info msg="TearDown network for sandbox \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\" successfully" Feb 13 15:58:28.350523 containerd[1756]: time="2025-02-13T15:58:28.350465408Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.350820 containerd[1756]: time="2025-02-13T15:58:28.350537529Z" level=info msg="RemovePodSandbox \"da329725e95c731c15321590d37b7ebb4b117705676326a6a24cd4ad05116dc6\" returns successfully" Feb 13 15:58:28.351322 containerd[1756]: time="2025-02-13T15:58:28.350975889Z" level=info msg="StopPodSandbox for \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\"" Feb 13 15:58:28.351322 containerd[1756]: time="2025-02-13T15:58:28.351069490Z" level=info msg="TearDown network for sandbox \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\" successfully" Feb 13 15:58:28.351322 containerd[1756]: time="2025-02-13T15:58:28.351078170Z" level=info msg="StopPodSandbox for \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\" returns successfully" Feb 13 15:58:28.351719 containerd[1756]: time="2025-02-13T15:58:28.351671051Z" level=info msg="RemovePodSandbox for \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\"" Feb 13 15:58:28.351757 containerd[1756]: time="2025-02-13T15:58:28.351718531Z" level=info msg="Forcibly stopping sandbox \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\"" Feb 13 15:58:28.351804 containerd[1756]: time="2025-02-13T15:58:28.351785331Z" level=info msg="TearDown network for sandbox \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\" successfully" Feb 13 15:58:28.362635 containerd[1756]: time="2025-02-13T15:58:28.362582153Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.362782 containerd[1756]: time="2025-02-13T15:58:28.362656273Z" level=info msg="RemovePodSandbox \"c411e229a992bce4634e93348764384d48e682e2fc6adfc395a1d48172c4baaf\" returns successfully" Feb 13 15:58:28.363555 containerd[1756]: time="2025-02-13T15:58:28.363517795Z" level=info msg="StopPodSandbox for \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\"" Feb 13 15:58:28.363658 containerd[1756]: time="2025-02-13T15:58:28.363624635Z" level=info msg="TearDown network for sandbox \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\" successfully" Feb 13 15:58:28.363658 containerd[1756]: time="2025-02-13T15:58:28.363633955Z" level=info msg="StopPodSandbox for \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\" returns successfully" Feb 13 15:58:28.364600 containerd[1756]: time="2025-02-13T15:58:28.364566957Z" level=info msg="RemovePodSandbox for \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\"" Feb 13 15:58:28.364671 containerd[1756]: time="2025-02-13T15:58:28.364602477Z" level=info msg="Forcibly stopping sandbox \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\"" Feb 13 15:58:28.364709 containerd[1756]: time="2025-02-13T15:58:28.364682677Z" level=info msg="TearDown network for sandbox \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\" successfully" Feb 13 15:58:28.377593 containerd[1756]: time="2025-02-13T15:58:28.377547423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.377743 containerd[1756]: time="2025-02-13T15:58:28.377627343Z" level=info msg="RemovePodSandbox \"043a916f92eb33c7c4864ff3c9bdb58bca7cca0e9f0e0ecee4d5455bff862cd8\" returns successfully" Feb 13 15:58:28.378616 containerd[1756]: time="2025-02-13T15:58:28.378482145Z" level=info msg="StopPodSandbox for \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\"" Feb 13 15:58:28.378716 containerd[1756]: time="2025-02-13T15:58:28.378645225Z" level=info msg="TearDown network for sandbox \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\" successfully" Feb 13 15:58:28.378716 containerd[1756]: time="2025-02-13T15:58:28.378661065Z" level=info msg="StopPodSandbox for \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\" returns successfully" Feb 13 15:58:28.379024 containerd[1756]: time="2025-02-13T15:58:28.378990346Z" level=info msg="RemovePodSandbox for \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\"" Feb 13 15:58:28.379053 containerd[1756]: time="2025-02-13T15:58:28.379027346Z" level=info msg="Forcibly stopping sandbox \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\"" Feb 13 15:58:28.379114 containerd[1756]: time="2025-02-13T15:58:28.379092226Z" level=info msg="TearDown network for sandbox \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\" successfully" Feb 13 15:58:28.390173 containerd[1756]: time="2025-02-13T15:58:28.390104968Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.390583 containerd[1756]: time="2025-02-13T15:58:28.390184928Z" level=info msg="RemovePodSandbox \"e0d68a0503b3ba66301e3332c6454e61b3d0e3e75652c6df7ee96b18c3fd3706\" returns successfully" Feb 13 15:58:28.390679 containerd[1756]: time="2025-02-13T15:58:28.390637089Z" level=info msg="StopPodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\"" Feb 13 15:58:28.390767 containerd[1756]: time="2025-02-13T15:58:28.390744769Z" level=info msg="TearDown network for sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" successfully" Feb 13 15:58:28.390767 containerd[1756]: time="2025-02-13T15:58:28.390762449Z" level=info msg="StopPodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" returns successfully" Feb 13 15:58:28.392433 containerd[1756]: time="2025-02-13T15:58:28.391043010Z" level=info msg="RemovePodSandbox for \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\"" Feb 13 15:58:28.392433 containerd[1756]: time="2025-02-13T15:58:28.391070250Z" level=info msg="Forcibly stopping sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\"" Feb 13 15:58:28.392433 containerd[1756]: time="2025-02-13T15:58:28.391132970Z" level=info msg="TearDown network for sandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" successfully" Feb 13 15:58:28.403422 containerd[1756]: time="2025-02-13T15:58:28.403155234Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.403422 containerd[1756]: time="2025-02-13T15:58:28.403240034Z" level=info msg="RemovePodSandbox \"fd8864c98a935398947978c69d79482363120fc2889bcf5256f44bf180cd0e52\" returns successfully" Feb 13 15:58:28.404205 containerd[1756]: time="2025-02-13T15:58:28.404173796Z" level=info msg="StopPodSandbox for \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\"" Feb 13 15:58:28.404307 containerd[1756]: time="2025-02-13T15:58:28.404275276Z" level=info msg="TearDown network for sandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\" successfully" Feb 13 15:58:28.404362 containerd[1756]: time="2025-02-13T15:58:28.404289316Z" level=info msg="StopPodSandbox for \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\" returns successfully" Feb 13 15:58:28.404920 containerd[1756]: time="2025-02-13T15:58:28.404770757Z" level=info msg="RemovePodSandbox for \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\"" Feb 13 15:58:28.404920 containerd[1756]: time="2025-02-13T15:58:28.404799197Z" level=info msg="Forcibly stopping sandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\"" Feb 13 15:58:28.404920 containerd[1756]: time="2025-02-13T15:58:28.404863917Z" level=info msg="TearDown network for sandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\" successfully" Feb 13 15:58:28.430776 containerd[1756]: time="2025-02-13T15:58:28.430725609Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.430776 containerd[1756]: time="2025-02-13T15:58:28.430818649Z" level=info msg="RemovePodSandbox \"208d0a1356df9362876b45bea54fba8056fcc311c5adec19d873cebb3faf0d09\" returns successfully" Feb 13 15:58:28.431731 containerd[1756]: time="2025-02-13T15:58:28.431619131Z" level=info msg="StopPodSandbox for \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\"" Feb 13 15:58:28.431919 containerd[1756]: time="2025-02-13T15:58:28.431831811Z" level=info msg="TearDown network for sandbox \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\" successfully" Feb 13 15:58:28.431919 containerd[1756]: time="2025-02-13T15:58:28.431846211Z" level=info msg="StopPodSandbox for \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\" returns successfully" Feb 13 15:58:28.433436 containerd[1756]: time="2025-02-13T15:58:28.432155132Z" level=info msg="RemovePodSandbox for \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\"" Feb 13 15:58:28.433436 containerd[1756]: time="2025-02-13T15:58:28.432177052Z" level=info msg="Forcibly stopping sandbox \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\"" Feb 13 15:58:28.433436 containerd[1756]: time="2025-02-13T15:58:28.432233652Z" level=info msg="TearDown network for sandbox \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\" successfully" Feb 13 15:58:28.442988 containerd[1756]: time="2025-02-13T15:58:28.442942194Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.443486 containerd[1756]: time="2025-02-13T15:58:28.443238394Z" level=info msg="RemovePodSandbox \"8bdda9a879f2d88e65669f537e1f2f099c3c033f1012352fb9ea1acf721142fc\" returns successfully" Feb 13 15:58:28.444100 containerd[1756]: time="2025-02-13T15:58:28.443916436Z" level=info msg="StopPodSandbox for \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\"" Feb 13 15:58:28.444100 containerd[1756]: time="2025-02-13T15:58:28.444032796Z" level=info msg="TearDown network for sandbox \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\" successfully" Feb 13 15:58:28.444100 containerd[1756]: time="2025-02-13T15:58:28.444042996Z" level=info msg="StopPodSandbox for \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\" returns successfully" Feb 13 15:58:28.444777 containerd[1756]: time="2025-02-13T15:58:28.444629877Z" level=info msg="RemovePodSandbox for \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\"" Feb 13 15:58:28.444777 containerd[1756]: time="2025-02-13T15:58:28.444656637Z" level=info msg="Forcibly stopping sandbox \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\"" Feb 13 15:58:28.444777 containerd[1756]: time="2025-02-13T15:58:28.444726597Z" level=info msg="TearDown network for sandbox \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\" successfully" Feb 13 15:58:28.455291 containerd[1756]: time="2025-02-13T15:58:28.455237218Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.455472 containerd[1756]: time="2025-02-13T15:58:28.455339698Z" level=info msg="RemovePodSandbox \"ee7ac58126e398118fa9a5492cea6ea1ef02fa243401223a3e7aa7e030a0e221\" returns successfully" Feb 13 15:58:28.456076 containerd[1756]: time="2025-02-13T15:58:28.455872019Z" level=info msg="StopPodSandbox for \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\"" Feb 13 15:58:28.456076 containerd[1756]: time="2025-02-13T15:58:28.455994780Z" level=info msg="TearDown network for sandbox \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\" successfully" Feb 13 15:58:28.456076 containerd[1756]: time="2025-02-13T15:58:28.456009420Z" level=info msg="StopPodSandbox for \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\" returns successfully" Feb 13 15:58:28.456494 containerd[1756]: time="2025-02-13T15:58:28.456444061Z" level=info msg="RemovePodSandbox for \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\"" Feb 13 15:58:28.456494 containerd[1756]: time="2025-02-13T15:58:28.456469221Z" level=info msg="Forcibly stopping sandbox \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\"" Feb 13 15:58:28.456854 containerd[1756]: time="2025-02-13T15:58:28.456768781Z" level=info msg="TearDown network for sandbox \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\" successfully" Feb 13 15:58:28.468359 containerd[1756]: time="2025-02-13T15:58:28.468289164Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.468528 containerd[1756]: time="2025-02-13T15:58:28.468379404Z" level=info msg="RemovePodSandbox \"a380e9d829f4e64dd65fee8756549a9e14052cd01f9848b5ae8b94afcd5d84bf\" returns successfully" Feb 13 15:58:28.468859 containerd[1756]: time="2025-02-13T15:58:28.468830485Z" level=info msg="StopPodSandbox for \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\"" Feb 13 15:58:28.469002 containerd[1756]: time="2025-02-13T15:58:28.468953406Z" level=info msg="TearDown network for sandbox \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\" successfully" Feb 13 15:58:28.469002 containerd[1756]: time="2025-02-13T15:58:28.468969646Z" level=info msg="StopPodSandbox for \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\" returns successfully" Feb 13 15:58:28.469721 containerd[1756]: time="2025-02-13T15:58:28.469498967Z" level=info msg="RemovePodSandbox for \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\"" Feb 13 15:58:28.469721 containerd[1756]: time="2025-02-13T15:58:28.469551967Z" level=info msg="Forcibly stopping sandbox \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\"" Feb 13 15:58:28.469721 containerd[1756]: time="2025-02-13T15:58:28.469638287Z" level=info msg="TearDown network for sandbox \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\" successfully" Feb 13 15:58:28.485943 containerd[1756]: time="2025-02-13T15:58:28.485894160Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.486142 containerd[1756]: time="2025-02-13T15:58:28.485974840Z" level=info msg="RemovePodSandbox \"6fce9a1bed436636da8f3640a1dfc1110dd83b6270ca450213e4f317d836bbe8\" returns successfully" Feb 13 15:58:28.488317 containerd[1756]: time="2025-02-13T15:58:28.486603081Z" level=info msg="StopPodSandbox for \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\"" Feb 13 15:58:28.488317 containerd[1756]: time="2025-02-13T15:58:28.486915762Z" level=info msg="TearDown network for sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" successfully" Feb 13 15:58:28.488317 containerd[1756]: time="2025-02-13T15:58:28.486947242Z" level=info msg="StopPodSandbox for \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" returns successfully" Feb 13 15:58:28.489441 containerd[1756]: time="2025-02-13T15:58:28.489413167Z" level=info msg="RemovePodSandbox for \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\"" Feb 13 15:58:28.489562 containerd[1756]: time="2025-02-13T15:58:28.489541367Z" level=info msg="Forcibly stopping sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\"" Feb 13 15:58:28.489734 containerd[1756]: time="2025-02-13T15:58:28.489708927Z" level=info msg="TearDown network for sandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" successfully" Feb 13 15:58:28.499673 containerd[1756]: time="2025-02-13T15:58:28.499621547Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.499819 containerd[1756]: time="2025-02-13T15:58:28.499694267Z" level=info msg="RemovePodSandbox \"bf33e5cf697ecc143bd0c2543f15f69716543acb924d23f1ec6e0268ae9d6788\" returns successfully" Feb 13 15:58:28.500203 containerd[1756]: time="2025-02-13T15:58:28.500165068Z" level=info msg="StopPodSandbox for \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\"" Feb 13 15:58:28.500509 containerd[1756]: time="2025-02-13T15:58:28.500492349Z" level=info msg="TearDown network for sandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\" successfully" Feb 13 15:58:28.500589 containerd[1756]: time="2025-02-13T15:58:28.500576429Z" level=info msg="StopPodSandbox for \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\" returns successfully" Feb 13 15:58:28.501017 containerd[1756]: time="2025-02-13T15:58:28.501000630Z" level=info msg="RemovePodSandbox for \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\"" Feb 13 15:58:28.501118 containerd[1756]: time="2025-02-13T15:58:28.501103550Z" level=info msg="Forcibly stopping sandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\"" Feb 13 15:58:28.501217 containerd[1756]: time="2025-02-13T15:58:28.501204350Z" level=info msg="TearDown network for sandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\" successfully" Feb 13 15:58:28.511383 containerd[1756]: time="2025-02-13T15:58:28.511347570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.511589 containerd[1756]: time="2025-02-13T15:58:28.511572811Z" level=info msg="RemovePodSandbox \"477ccf81d8b65f00a8ccb885eaaff3bc1ede3831b8cb3169fa3544564dbbed67\" returns successfully" Feb 13 15:58:28.512101 containerd[1756]: time="2025-02-13T15:58:28.512079172Z" level=info msg="StopPodSandbox for \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\"" Feb 13 15:58:28.512331 containerd[1756]: time="2025-02-13T15:58:28.512286652Z" level=info msg="TearDown network for sandbox \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\" successfully" Feb 13 15:58:28.512414 containerd[1756]: time="2025-02-13T15:58:28.512400693Z" level=info msg="StopPodSandbox for \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\" returns successfully" Feb 13 15:58:28.512897 containerd[1756]: time="2025-02-13T15:58:28.512861934Z" level=info msg="RemovePodSandbox for \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\"" Feb 13 15:58:28.512897 containerd[1756]: time="2025-02-13T15:58:28.512896694Z" level=info msg="Forcibly stopping sandbox \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\"" Feb 13 15:58:28.512991 containerd[1756]: time="2025-02-13T15:58:28.512980494Z" level=info msg="TearDown network for sandbox \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\" successfully" Feb 13 15:58:28.522451 containerd[1756]: time="2025-02-13T15:58:28.522407393Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.522566 containerd[1756]: time="2025-02-13T15:58:28.522476913Z" level=info msg="RemovePodSandbox \"7584fe74b0221a2bc3fb1b9c46fa6fc5c801261cb89ad4e74ac74f9de329a005\" returns successfully" Feb 13 15:58:28.522997 containerd[1756]: time="2025-02-13T15:58:28.522974754Z" level=info msg="StopPodSandbox for \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\"" Feb 13 15:58:28.523166 containerd[1756]: time="2025-02-13T15:58:28.523152714Z" level=info msg="TearDown network for sandbox \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\" successfully" Feb 13 15:58:28.523223 containerd[1756]: time="2025-02-13T15:58:28.523212194Z" level=info msg="StopPodSandbox for \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\" returns successfully" Feb 13 15:58:28.524129 containerd[1756]: time="2025-02-13T15:58:28.523631435Z" level=info msg="RemovePodSandbox for \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\"" Feb 13 15:58:28.524129 containerd[1756]: time="2025-02-13T15:58:28.523661715Z" level=info msg="Forcibly stopping sandbox \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\"" Feb 13 15:58:28.524129 containerd[1756]: time="2025-02-13T15:58:28.523755475Z" level=info msg="TearDown network for sandbox \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\" successfully" Feb 13 15:58:28.533005 containerd[1756]: time="2025-02-13T15:58:28.532959254Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.533444 containerd[1756]: time="2025-02-13T15:58:28.533024534Z" level=info msg="RemovePodSandbox \"4432f8bacdc842b6fafc7a15968254049f6bbca111332ef725f9951f5b8978a8\" returns successfully" Feb 13 15:58:28.533801 containerd[1756]: time="2025-02-13T15:58:28.533771975Z" level=info msg="StopPodSandbox for \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\"" Feb 13 15:58:28.533903 containerd[1756]: time="2025-02-13T15:58:28.533878256Z" level=info msg="TearDown network for sandbox \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\" successfully" Feb 13 15:58:28.533903 containerd[1756]: time="2025-02-13T15:58:28.533894296Z" level=info msg="StopPodSandbox for \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\" returns successfully" Feb 13 15:58:28.534352 containerd[1756]: time="2025-02-13T15:58:28.534311296Z" level=info msg="RemovePodSandbox for \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\"" Feb 13 15:58:28.534352 containerd[1756]: time="2025-02-13T15:58:28.534335097Z" level=info msg="Forcibly stopping sandbox \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\"" Feb 13 15:58:28.534470 containerd[1756]: time="2025-02-13T15:58:28.534390897Z" level=info msg="TearDown network for sandbox \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\" successfully" Feb 13 15:58:28.543303 containerd[1756]: time="2025-02-13T15:58:28.543243554Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.543404 containerd[1756]: time="2025-02-13T15:58:28.543360075Z" level=info msg="RemovePodSandbox \"bf52af6cabe8eaf5942a918d7275b690635c2e7eef9b561bd0d84b87e59b2f76\" returns successfully" Feb 13 15:58:28.544005 containerd[1756]: time="2025-02-13T15:58:28.543811835Z" level=info msg="StopPodSandbox for \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\"" Feb 13 15:58:28.544005 containerd[1756]: time="2025-02-13T15:58:28.543928836Z" level=info msg="TearDown network for sandbox \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\" successfully" Feb 13 15:58:28.544005 containerd[1756]: time="2025-02-13T15:58:28.543938436Z" level=info msg="StopPodSandbox for \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\" returns successfully" Feb 13 15:58:28.547929 containerd[1756]: time="2025-02-13T15:58:28.546477961Z" level=info msg="RemovePodSandbox for \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\"" Feb 13 15:58:28.547929 containerd[1756]: time="2025-02-13T15:58:28.546512321Z" level=info msg="Forcibly stopping sandbox \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\"" Feb 13 15:58:28.547929 containerd[1756]: time="2025-02-13T15:58:28.546611921Z" level=info msg="TearDown network for sandbox \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\" successfully" Feb 13 15:58:28.556795 containerd[1756]: time="2025-02-13T15:58:28.556756221Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.557027 containerd[1756]: time="2025-02-13T15:58:28.557011582Z" level=info msg="RemovePodSandbox \"411fcc6ee81e37c01b366dc6fbd15a78617df9e1c9910f7c846b2c728b806493\" returns successfully" Feb 13 15:58:28.557706 containerd[1756]: time="2025-02-13T15:58:28.557668903Z" level=info msg="StopPodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\"" Feb 13 15:58:28.557820 containerd[1756]: time="2025-02-13T15:58:28.557797543Z" level=info msg="TearDown network for sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" successfully" Feb 13 15:58:28.557820 containerd[1756]: time="2025-02-13T15:58:28.557814864Z" level=info msg="StopPodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" returns successfully" Feb 13 15:58:28.558327 containerd[1756]: time="2025-02-13T15:58:28.558267744Z" level=info msg="RemovePodSandbox for \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\"" Feb 13 15:58:28.559121 containerd[1756]: time="2025-02-13T15:58:28.558306024Z" level=info msg="Forcibly stopping sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\"" Feb 13 15:58:28.559121 containerd[1756]: time="2025-02-13T15:58:28.558487665Z" level=info msg="TearDown network for sandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" successfully" Feb 13 15:58:28.568936 containerd[1756]: time="2025-02-13T15:58:28.568875086Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.569325 containerd[1756]: time="2025-02-13T15:58:28.568951966Z" level=info msg="RemovePodSandbox \"afd56b6bd04eb5bfd4d9e2af5bf9bfddd26a59400052c1b2f22d933cfaece4ec\" returns successfully" Feb 13 15:58:28.569743 containerd[1756]: time="2025-02-13T15:58:28.569560767Z" level=info msg="StopPodSandbox for \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\"" Feb 13 15:58:28.569743 containerd[1756]: time="2025-02-13T15:58:28.569668207Z" level=info msg="TearDown network for sandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\" successfully" Feb 13 15:58:28.569743 containerd[1756]: time="2025-02-13T15:58:28.569678567Z" level=info msg="StopPodSandbox for \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\" returns successfully" Feb 13 15:58:28.570561 containerd[1756]: time="2025-02-13T15:58:28.569965528Z" level=info msg="RemovePodSandbox for \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\"" Feb 13 15:58:28.570561 containerd[1756]: time="2025-02-13T15:58:28.569994888Z" level=info msg="Forcibly stopping sandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\"" Feb 13 15:58:28.570561 containerd[1756]: time="2025-02-13T15:58:28.570052448Z" level=info msg="TearDown network for sandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\" successfully" Feb 13 15:58:28.580109 containerd[1756]: time="2025-02-13T15:58:28.580069908Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.580394 containerd[1756]: time="2025-02-13T15:58:28.580373109Z" level=info msg="RemovePodSandbox \"3185e86bed4f1b35c4112384737d21116ce8d693e2e239a6489429f22f262b01\" returns successfully" Feb 13 15:58:28.580859 containerd[1756]: time="2025-02-13T15:58:28.580837990Z" level=info msg="StopPodSandbox for \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\"" Feb 13 15:58:28.581164 containerd[1756]: time="2025-02-13T15:58:28.581136430Z" level=info msg="TearDown network for sandbox \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\" successfully" Feb 13 15:58:28.581241 containerd[1756]: time="2025-02-13T15:58:28.581225950Z" level=info msg="StopPodSandbox for \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\" returns successfully" Feb 13 15:58:28.581666 containerd[1756]: time="2025-02-13T15:58:28.581647271Z" level=info msg="RemovePodSandbox for \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\"" Feb 13 15:58:28.582017 containerd[1756]: time="2025-02-13T15:58:28.581842872Z" level=info msg="Forcibly stopping sandbox \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\"" Feb 13 15:58:28.582017 containerd[1756]: time="2025-02-13T15:58:28.581923752Z" level=info msg="TearDown network for sandbox \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\" successfully" Feb 13 15:58:28.590952 containerd[1756]: time="2025-02-13T15:58:28.590738369Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.590952 containerd[1756]: time="2025-02-13T15:58:28.590816450Z" level=info msg="RemovePodSandbox \"072a3a44c78a93808e596e361fa24b426423eef6612b9fe0bc1c3b3c211861d8\" returns successfully" Feb 13 15:58:28.591248 containerd[1756]: time="2025-02-13T15:58:28.591170090Z" level=info msg="StopPodSandbox for \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\"" Feb 13 15:58:28.591287 containerd[1756]: time="2025-02-13T15:58:28.591278770Z" level=info msg="TearDown network for sandbox \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\" successfully" Feb 13 15:58:28.591356 containerd[1756]: time="2025-02-13T15:58:28.591289451Z" level=info msg="StopPodSandbox for \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\" returns successfully" Feb 13 15:58:28.592317 containerd[1756]: time="2025-02-13T15:58:28.591701571Z" level=info msg="RemovePodSandbox for \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\"" Feb 13 15:58:28.592317 containerd[1756]: time="2025-02-13T15:58:28.591746571Z" level=info msg="Forcibly stopping sandbox \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\"" Feb 13 15:58:28.592317 containerd[1756]: time="2025-02-13T15:58:28.591822772Z" level=info msg="TearDown network for sandbox \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\" successfully" Feb 13 15:58:28.603353 containerd[1756]: time="2025-02-13T15:58:28.603288675Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.603555 containerd[1756]: time="2025-02-13T15:58:28.603386235Z" level=info msg="RemovePodSandbox \"028277f06583f430d7ad1c52711070f08d1d49d547664006af8807fa2da9d025\" returns successfully" Feb 13 15:58:28.603994 containerd[1756]: time="2025-02-13T15:58:28.603970556Z" level=info msg="StopPodSandbox for \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\"" Feb 13 15:58:28.604266 containerd[1756]: time="2025-02-13T15:58:28.604210236Z" level=info msg="TearDown network for sandbox \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\" successfully" Feb 13 15:58:28.604266 containerd[1756]: time="2025-02-13T15:58:28.604225956Z" level=info msg="StopPodSandbox for \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\" returns successfully" Feb 13 15:58:28.606041 containerd[1756]: time="2025-02-13T15:58:28.604581677Z" level=info msg="RemovePodSandbox for \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\"" Feb 13 15:58:28.606041 containerd[1756]: time="2025-02-13T15:58:28.604604917Z" level=info msg="Forcibly stopping sandbox \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\"" Feb 13 15:58:28.606041 containerd[1756]: time="2025-02-13T15:58:28.604684517Z" level=info msg="TearDown network for sandbox \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\" successfully" Feb 13 15:58:28.613818 containerd[1756]: time="2025-02-13T15:58:28.613782496Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.614039 containerd[1756]: time="2025-02-13T15:58:28.614023096Z" level=info msg="RemovePodSandbox \"f9636268d73240605397e630ace72223fea4484db6a60d85f62c3d98a516c5db\" returns successfully" Feb 13 15:58:28.614586 containerd[1756]: time="2025-02-13T15:58:28.614555297Z" level=info msg="StopPodSandbox for \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\"" Feb 13 15:58:28.614701 containerd[1756]: time="2025-02-13T15:58:28.614678057Z" level=info msg="TearDown network for sandbox \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\" successfully" Feb 13 15:58:28.614701 containerd[1756]: time="2025-02-13T15:58:28.614696417Z" level=info msg="StopPodSandbox for \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\" returns successfully" Feb 13 15:58:28.614977 containerd[1756]: time="2025-02-13T15:58:28.614951338Z" level=info msg="RemovePodSandbox for \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\"" Feb 13 15:58:28.615077 containerd[1756]: time="2025-02-13T15:58:28.614977698Z" level=info msg="Forcibly stopping sandbox \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\"" Feb 13 15:58:28.615077 containerd[1756]: time="2025-02-13T15:58:28.615045618Z" level=info msg="TearDown network for sandbox \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\" successfully" Feb 13 15:58:28.623631 containerd[1756]: time="2025-02-13T15:58:28.623576995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:58:28.623741 containerd[1756]: time="2025-02-13T15:58:28.623676115Z" level=info msg="RemovePodSandbox \"e96151856727907584084bcd392bd916e00119a8c4fcf118c550967278bf53b2\" returns successfully" Feb 13 15:58:45.243949 kubelet[3351]: I0213 15:58:45.243583 3351 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:58:45.308742 kubelet[3351]: I0213 15:58:45.308704 3351 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:59:55.631654 systemd[1]: Started sshd@7-10.200.20.24:22-10.200.16.10:37784.service - OpenSSH per-connection server daemon (10.200.16.10:37784). Feb 13 15:59:56.105438 sshd[6564]: Accepted publickey for core from 10.200.16.10 port 37784 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c Feb 13 15:59:56.107987 sshd-session[6564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:59:56.113150 systemd-logind[1709]: New session 10 of user core. Feb 13 15:59:56.119543 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:59:56.517101 sshd[6566]: Connection closed by 10.200.16.10 port 37784 Feb 13 15:59:56.517731 sshd-session[6564]: pam_unix(sshd:session): session closed for user core Feb 13 15:59:56.521936 systemd[1]: sshd@7-10.200.20.24:22-10.200.16.10:37784.service: Deactivated successfully. Feb 13 15:59:56.523784 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:59:56.524773 systemd-logind[1709]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:59:56.526018 systemd-logind[1709]: Removed session 10. Feb 13 16:00:01.604168 systemd[1]: Started sshd@8-10.200.20.24:22-10.200.16.10:40318.service - OpenSSH per-connection server daemon (10.200.16.10:40318). 
Feb 13 16:00:02.093101 sshd[6600]: Accepted publickey for core from 10.200.16.10 port 40318 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:02.094767 sshd-session[6600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:02.100567 systemd-logind[1709]: New session 11 of user core.
Feb 13 16:00:02.108109 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 16:00:02.502616 sshd[6622]: Connection closed by 10.200.16.10 port 40318
Feb 13 16:00:02.501915 sshd-session[6600]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:02.505380 systemd-logind[1709]: Session 11 logged out. Waiting for processes to exit.
Feb 13 16:00:02.505776 systemd[1]: sshd@8-10.200.20.24:22-10.200.16.10:40318.service: Deactivated successfully.
Feb 13 16:00:02.507776 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 16:00:02.511530 systemd-logind[1709]: Removed session 11.
Feb 13 16:00:07.595642 systemd[1]: Started sshd@9-10.200.20.24:22-10.200.16.10:40324.service - OpenSSH per-connection server daemon (10.200.16.10:40324).
Feb 13 16:00:08.081939 sshd[6634]: Accepted publickey for core from 10.200.16.10 port 40324 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:08.083496 sshd-session[6634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:08.087877 systemd-logind[1709]: New session 12 of user core.
Feb 13 16:00:08.091596 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 16:00:08.508337 sshd[6636]: Connection closed by 10.200.16.10 port 40324
Feb 13 16:00:08.509173 sshd-session[6634]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:08.513251 systemd[1]: sshd@9-10.200.20.24:22-10.200.16.10:40324.service: Deactivated successfully.
Feb 13 16:00:08.515719 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 16:00:08.517018 systemd-logind[1709]: Session 12 logged out. Waiting for processes to exit.
Feb 13 16:00:08.518077 systemd-logind[1709]: Removed session 12.
Feb 13 16:00:13.589454 systemd[1]: Started sshd@10-10.200.20.24:22-10.200.16.10:50310.service - OpenSSH per-connection server daemon (10.200.16.10:50310).
Feb 13 16:00:14.042701 sshd[6650]: Accepted publickey for core from 10.200.16.10 port 50310 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:14.044094 sshd-session[6650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:14.049147 systemd-logind[1709]: New session 13 of user core.
Feb 13 16:00:14.053501 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 16:00:14.457315 sshd[6652]: Connection closed by 10.200.16.10 port 50310
Feb 13 16:00:14.457081 sshd-session[6650]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:14.461564 systemd[1]: sshd@10-10.200.20.24:22-10.200.16.10:50310.service: Deactivated successfully.
Feb 13 16:00:14.463987 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 16:00:14.465153 systemd-logind[1709]: Session 13 logged out. Waiting for processes to exit.
Feb 13 16:00:14.466141 systemd-logind[1709]: Removed session 13.
Feb 13 16:00:19.543599 systemd[1]: Started sshd@11-10.200.20.24:22-10.200.16.10:48806.service - OpenSSH per-connection server daemon (10.200.16.10:48806).
Feb 13 16:00:19.998903 sshd[6664]: Accepted publickey for core from 10.200.16.10 port 48806 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:20.000422 sshd-session[6664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:20.005609 systemd-logind[1709]: New session 14 of user core.
Feb 13 16:00:20.010528 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 16:00:20.409416 sshd[6666]: Connection closed by 10.200.16.10 port 48806
Feb 13 16:00:20.410191 sshd-session[6664]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:20.413676 systemd[1]: sshd@11-10.200.20.24:22-10.200.16.10:48806.service: Deactivated successfully.
Feb 13 16:00:20.416321 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 16:00:20.417614 systemd-logind[1709]: Session 14 logged out. Waiting for processes to exit.
Feb 13 16:00:20.418839 systemd-logind[1709]: Removed session 14.
Feb 13 16:00:20.503902 systemd[1]: Started sshd@12-10.200.20.24:22-10.200.16.10:48812.service - OpenSSH per-connection server daemon (10.200.16.10:48812).
Feb 13 16:00:20.986878 sshd[6678]: Accepted publickey for core from 10.200.16.10 port 48812 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:20.988241 sshd-session[6678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:20.992215 systemd-logind[1709]: New session 15 of user core.
Feb 13 16:00:21.001630 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 16:00:21.444737 sshd[6680]: Connection closed by 10.200.16.10 port 48812
Feb 13 16:00:21.445577 sshd-session[6678]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:21.450152 systemd-logind[1709]: Session 15 logged out. Waiting for processes to exit.
Feb 13 16:00:21.450432 systemd[1]: sshd@12-10.200.20.24:22-10.200.16.10:48812.service: Deactivated successfully.
Feb 13 16:00:21.452680 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 16:00:21.453874 systemd-logind[1709]: Removed session 15.
Feb 13 16:00:21.533520 systemd[1]: Started sshd@13-10.200.20.24:22-10.200.16.10:48814.service - OpenSSH per-connection server daemon (10.200.16.10:48814).
Feb 13 16:00:22.039149 sshd[6688]: Accepted publickey for core from 10.200.16.10 port 48814 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:22.040556 sshd-session[6688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:22.045190 systemd-logind[1709]: New session 16 of user core.
Feb 13 16:00:22.052535 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 16:00:22.457921 sshd[6690]: Connection closed by 10.200.16.10 port 48814
Feb 13 16:00:22.458534 sshd-session[6688]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:22.461615 systemd[1]: sshd@13-10.200.20.24:22-10.200.16.10:48814.service: Deactivated successfully.
Feb 13 16:00:22.463461 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 16:00:22.464982 systemd-logind[1709]: Session 16 logged out. Waiting for processes to exit.
Feb 13 16:00:22.466614 systemd-logind[1709]: Removed session 16.
Feb 13 16:00:27.546621 systemd[1]: Started sshd@14-10.200.20.24:22-10.200.16.10:48820.service - OpenSSH per-connection server daemon (10.200.16.10:48820).
Feb 13 16:00:27.996383 sshd[6701]: Accepted publickey for core from 10.200.16.10 port 48820 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:27.998433 sshd-session[6701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:28.006858 systemd[1]: run-containerd-runc-k8s.io-91f8579597d0d73d73b2d52edd3a6e2a80566b47a1f624eb2ed5d2ee835d5f5a-runc.Wg2uWN.mount: Deactivated successfully.
Feb 13 16:00:28.013975 systemd-logind[1709]: New session 17 of user core.
Feb 13 16:00:28.016345 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 16:00:28.413347 sshd[6719]: Connection closed by 10.200.16.10 port 48820
Feb 13 16:00:28.413990 sshd-session[6701]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:28.417649 systemd[1]: sshd@14-10.200.20.24:22-10.200.16.10:48820.service: Deactivated successfully.
Feb 13 16:00:28.419743 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 16:00:28.420619 systemd-logind[1709]: Session 17 logged out. Waiting for processes to exit.
Feb 13 16:00:28.422151 systemd-logind[1709]: Removed session 17.
Feb 13 16:00:33.496603 systemd[1]: Started sshd@15-10.200.20.24:22-10.200.16.10:45476.service - OpenSSH per-connection server daemon (10.200.16.10:45476).
Feb 13 16:00:33.931835 sshd[6762]: Accepted publickey for core from 10.200.16.10 port 45476 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:33.933601 sshd-session[6762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:33.938750 systemd-logind[1709]: New session 18 of user core.
Feb 13 16:00:33.943512 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 16:00:34.339480 sshd[6764]: Connection closed by 10.200.16.10 port 45476
Feb 13 16:00:34.340261 sshd-session[6762]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:34.345086 systemd[1]: sshd@15-10.200.20.24:22-10.200.16.10:45476.service: Deactivated successfully.
Feb 13 16:00:34.348069 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 16:00:34.348947 systemd-logind[1709]: Session 18 logged out. Waiting for processes to exit.
Feb 13 16:00:34.350031 systemd-logind[1709]: Removed session 18.
Feb 13 16:00:39.436084 systemd[1]: Started sshd@16-10.200.20.24:22-10.200.16.10:55028.service - OpenSSH per-connection server daemon (10.200.16.10:55028).
Feb 13 16:00:39.885616 sshd[6793]: Accepted publickey for core from 10.200.16.10 port 55028 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:39.887277 sshd-session[6793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:39.891485 systemd-logind[1709]: New session 19 of user core.
Feb 13 16:00:39.899509 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 16:00:40.293516 sshd[6795]: Connection closed by 10.200.16.10 port 55028
Feb 13 16:00:40.294123 sshd-session[6793]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:40.297625 systemd[1]: sshd@16-10.200.20.24:22-10.200.16.10:55028.service: Deactivated successfully.
Feb 13 16:00:40.299253 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 16:00:40.300889 systemd-logind[1709]: Session 19 logged out. Waiting for processes to exit.
Feb 13 16:00:40.301855 systemd-logind[1709]: Removed session 19.
Feb 13 16:00:40.370520 systemd[1]: Started sshd@17-10.200.20.24:22-10.200.16.10:55034.service - OpenSSH per-connection server daemon (10.200.16.10:55034).
Feb 13 16:00:40.810874 sshd[6807]: Accepted publickey for core from 10.200.16.10 port 55034 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:40.812325 sshd-session[6807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:40.816545 systemd-logind[1709]: New session 20 of user core.
Feb 13 16:00:40.826510 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 16:00:41.319435 sshd[6809]: Connection closed by 10.200.16.10 port 55034
Feb 13 16:00:41.318931 sshd-session[6807]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:41.321907 systemd-logind[1709]: Session 20 logged out. Waiting for processes to exit.
Feb 13 16:00:41.322099 systemd[1]: sshd@17-10.200.20.24:22-10.200.16.10:55034.service: Deactivated successfully.
Feb 13 16:00:41.325050 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 16:00:41.327582 systemd-logind[1709]: Removed session 20.
Feb 13 16:00:41.399291 systemd[1]: Started sshd@18-10.200.20.24:22-10.200.16.10:55040.service - OpenSSH per-connection server daemon (10.200.16.10:55040).
Feb 13 16:00:41.846750 sshd[6818]: Accepted publickey for core from 10.200.16.10 port 55040 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:41.848087 sshd-session[6818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:41.853348 systemd-logind[1709]: New session 21 of user core.
Feb 13 16:00:41.856494 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 16:00:43.924930 sshd[6820]: Connection closed by 10.200.16.10 port 55040
Feb 13 16:00:43.927809 sshd-session[6818]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:43.932495 systemd[1]: sshd@18-10.200.20.24:22-10.200.16.10:55040.service: Deactivated successfully.
Feb 13 16:00:43.937060 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 16:00:43.941709 systemd-logind[1709]: Session 21 logged out. Waiting for processes to exit.
Feb 13 16:00:43.943265 systemd-logind[1709]: Removed session 21.
Feb 13 16:00:44.009444 systemd[1]: Started sshd@19-10.200.20.24:22-10.200.16.10:55044.service - OpenSSH per-connection server daemon (10.200.16.10:55044).
Feb 13 16:00:44.467454 sshd[6839]: Accepted publickey for core from 10.200.16.10 port 55044 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:44.468983 sshd-session[6839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:44.473559 systemd-logind[1709]: New session 22 of user core.
Feb 13 16:00:44.478533 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 16:00:44.993106 sshd[6841]: Connection closed by 10.200.16.10 port 55044
Feb 13 16:00:44.993973 sshd-session[6839]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:45.000730 systemd[1]: sshd@19-10.200.20.24:22-10.200.16.10:55044.service: Deactivated successfully.
Feb 13 16:00:45.006060 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 16:00:45.008037 systemd-logind[1709]: Session 22 logged out. Waiting for processes to exit.
Feb 13 16:00:45.009516 systemd-logind[1709]: Removed session 22.
Feb 13 16:00:45.078461 systemd[1]: Started sshd@20-10.200.20.24:22-10.200.16.10:55046.service - OpenSSH per-connection server daemon (10.200.16.10:55046).
Feb 13 16:00:45.538046 sshd[6849]: Accepted publickey for core from 10.200.16.10 port 55046 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:45.539729 sshd-session[6849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:45.544364 systemd-logind[1709]: New session 23 of user core.
Feb 13 16:00:45.551528 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 16:00:45.945041 sshd[6851]: Connection closed by 10.200.16.10 port 55046
Feb 13 16:00:45.945683 sshd-session[6849]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:45.949579 systemd[1]: sshd@20-10.200.20.24:22-10.200.16.10:55046.service: Deactivated successfully.
Feb 13 16:00:45.952003 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 16:00:45.953036 systemd-logind[1709]: Session 23 logged out. Waiting for processes to exit.
Feb 13 16:00:45.954230 systemd-logind[1709]: Removed session 23.
Feb 13 16:00:51.031635 systemd[1]: Started sshd@21-10.200.20.24:22-10.200.16.10:33500.service - OpenSSH per-connection server daemon (10.200.16.10:33500).
Feb 13 16:00:51.462634 sshd[6873]: Accepted publickey for core from 10.200.16.10 port 33500 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:51.464411 sshd-session[6873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:51.469426 systemd-logind[1709]: New session 24 of user core.
Feb 13 16:00:51.475627 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 16:00:51.858427 sshd[6875]: Connection closed by 10.200.16.10 port 33500
Feb 13 16:00:51.859079 sshd-session[6873]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:51.863730 systemd[1]: sshd@21-10.200.20.24:22-10.200.16.10:33500.service: Deactivated successfully.
Feb 13 16:00:51.866934 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 16:00:51.868265 systemd-logind[1709]: Session 24 logged out. Waiting for processes to exit.
Feb 13 16:00:51.869842 systemd-logind[1709]: Removed session 24.
Feb 13 16:00:56.941691 systemd[1]: Started sshd@22-10.200.20.24:22-10.200.16.10:33502.service - OpenSSH per-connection server daemon (10.200.16.10:33502).
Feb 13 16:00:57.374253 sshd[6887]: Accepted publickey for core from 10.200.16.10 port 33502 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:00:57.375657 sshd-session[6887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:00:57.380127 systemd-logind[1709]: New session 25 of user core.
Feb 13 16:00:57.390528 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 16:00:57.761863 sshd[6889]: Connection closed by 10.200.16.10 port 33502
Feb 13 16:00:57.762572 sshd-session[6887]: pam_unix(sshd:session): session closed for user core
Feb 13 16:00:57.766357 systemd[1]: sshd@22-10.200.20.24:22-10.200.16.10:33502.service: Deactivated successfully.
Feb 13 16:00:57.769079 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 16:00:57.770576 systemd-logind[1709]: Session 25 logged out. Waiting for processes to exit.
Feb 13 16:00:57.772015 systemd-logind[1709]: Removed session 25.
Feb 13 16:01:02.856577 systemd[1]: Started sshd@23-10.200.20.24:22-10.200.16.10:60618.service - OpenSSH per-connection server daemon (10.200.16.10:60618).
Feb 13 16:01:03.347677 sshd[6942]: Accepted publickey for core from 10.200.16.10 port 60618 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:01:03.349269 sshd-session[6942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:01:03.354165 systemd-logind[1709]: New session 26 of user core.
Feb 13 16:01:03.358556 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 16:01:03.762132 sshd[6944]: Connection closed by 10.200.16.10 port 60618
Feb 13 16:01:03.762734 sshd-session[6942]: pam_unix(sshd:session): session closed for user core
Feb 13 16:01:03.766196 systemd[1]: sshd@23-10.200.20.24:22-10.200.16.10:60618.service: Deactivated successfully.
Feb 13 16:01:03.768223 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 16:01:03.769849 systemd-logind[1709]: Session 26 logged out. Waiting for processes to exit.
Feb 13 16:01:03.770986 systemd-logind[1709]: Removed session 26.
Feb 13 16:01:07.772344 update_engine[1719]: I20250213 16:01:07.771837 1719 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 16:01:07.772344 update_engine[1719]: I20250213 16:01:07.771900 1719 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 16:01:07.772344 update_engine[1719]: I20250213 16:01:07.772114 1719 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 16:01:07.780491 update_engine[1719]: I20250213 16:01:07.773123 1719 omaha_request_params.cc:62] Current group set to stable
Feb 13 16:01:07.780491 update_engine[1719]: I20250213 16:01:07.774191 1719 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 16:01:07.780491 update_engine[1719]: I20250213 16:01:07.774215 1719 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 16:01:07.780491 update_engine[1719]: I20250213 16:01:07.774235 1719 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 16:01:07.780491 update_engine[1719]: I20250213 16:01:07.777041 1719 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 16:01:07.780491 update_engine[1719]: I20250213 16:01:07.777184 1719 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 16:01:07.780491 update_engine[1719]: I20250213 16:01:07.777191 1719 omaha_request_action.cc:272] Request:
Feb 13 16:01:07.780491 update_engine[1719]:
Feb 13 16:01:07.780491 update_engine[1719]:
Feb 13 16:01:07.780491 update_engine[1719]:
Feb 13 16:01:07.780491 update_engine[1719]:
Feb 13 16:01:07.780491 update_engine[1719]:
Feb 13 16:01:07.780491 update_engine[1719]:
Feb 13 16:01:07.780491 update_engine[1719]:
Feb 13 16:01:07.780491 update_engine[1719]:
Feb 13 16:01:07.780491 update_engine[1719]: I20250213 16:01:07.777197 1719 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 16:01:07.780491 update_engine[1719]: I20250213 16:01:07.779657 1719 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 16:01:07.780491 update_engine[1719]: I20250213 16:01:07.780026 1719 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 16:01:07.780920 locksmithd[1783]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 16:01:07.882662 update_engine[1719]: E20250213 16:01:07.882496 1719 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 16:01:07.886879 update_engine[1719]: I20250213 16:01:07.886790 1719 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 16:01:08.850619 systemd[1]: Started sshd@24-10.200.20.24:22-10.200.16.10:44930.service - OpenSSH per-connection server daemon (10.200.16.10:44930).
Feb 13 16:01:09.320019 sshd[6956]: Accepted publickey for core from 10.200.16.10 port 44930 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:01:09.321585 sshd-session[6956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:01:09.327710 systemd-logind[1709]: New session 27 of user core.
Feb 13 16:01:09.334563 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 16:01:09.720616 sshd[6961]: Connection closed by 10.200.16.10 port 44930
Feb 13 16:01:09.721164 sshd-session[6956]: pam_unix(sshd:session): session closed for user core
Feb 13 16:01:09.725352 systemd-logind[1709]: Session 27 logged out. Waiting for processes to exit.
Feb 13 16:01:09.725643 systemd[1]: sshd@24-10.200.20.24:22-10.200.16.10:44930.service: Deactivated successfully.
Feb 13 16:01:09.728002 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 16:01:09.729252 systemd-logind[1709]: Removed session 27.
Feb 13 16:01:14.819592 systemd[1]: Started sshd@25-10.200.20.24:22-10.200.16.10:44942.service - OpenSSH per-connection server daemon (10.200.16.10:44942).
Feb 13 16:01:15.305835 sshd[6977]: Accepted publickey for core from 10.200.16.10 port 44942 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:01:15.307308 sshd-session[6977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:01:15.312334 systemd-logind[1709]: New session 28 of user core.
Feb 13 16:01:15.314545 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 16:01:15.717897 sshd[6979]: Connection closed by 10.200.16.10 port 44942
Feb 13 16:01:15.718530 sshd-session[6977]: pam_unix(sshd:session): session closed for user core
Feb 13 16:01:15.722271 systemd[1]: sshd@25-10.200.20.24:22-10.200.16.10:44942.service: Deactivated successfully.
Feb 13 16:01:15.724222 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 16:01:15.725034 systemd-logind[1709]: Session 28 logged out. Waiting for processes to exit.
Feb 13 16:01:15.725939 systemd-logind[1709]: Removed session 28.
Feb 13 16:01:17.768988 update_engine[1719]: I20250213 16:01:17.768745 1719 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 16:01:17.769395 update_engine[1719]: I20250213 16:01:17.769009 1719 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 16:01:17.769395 update_engine[1719]: I20250213 16:01:17.769284 1719 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 16:01:18.090089 update_engine[1719]: E20250213 16:01:18.089939 1719 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 16:01:18.090089 update_engine[1719]: I20250213 16:01:18.090046 1719 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 16:01:20.809970 systemd[1]: Started sshd@26-10.200.20.24:22-10.200.16.10:35452.service - OpenSSH per-connection server daemon (10.200.16.10:35452).
Feb 13 16:01:21.297544 sshd[7002]: Accepted publickey for core from 10.200.16.10 port 35452 ssh2: RSA SHA256:ICUivcNh0aANIh+IPfqNd9W3RR/+laIAsgbf/G6em8c
Feb 13 16:01:21.299252 sshd-session[7002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:01:21.303374 systemd-logind[1709]: New session 29 of user core.
Feb 13 16:01:21.307504 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 16:01:21.707205 sshd[7004]: Connection closed by 10.200.16.10 port 35452
Feb 13 16:01:21.708028 sshd-session[7002]: pam_unix(sshd:session): session closed for user core
Feb 13 16:01:21.712274 systemd[1]: sshd@26-10.200.20.24:22-10.200.16.10:35452.service: Deactivated successfully.
Feb 13 16:01:21.713995 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 16:01:21.714779 systemd-logind[1709]: Session 29 logged out. Waiting for processes to exit.
Feb 13 16:01:21.716143 systemd-logind[1709]: Removed session 29.