Feb 13 20:12:04.310208 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:12:04.310231 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025
Feb 13 20:12:04.310240 kernel: KASLR enabled
Feb 13 20:12:04.310245 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 13 20:12:04.310253 kernel: printk: bootconsole [pl11] enabled
Feb 13 20:12:04.310258 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:12:04.310265 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e423d98
Feb 13 20:12:04.310271 kernel: random: crng init done
Feb 13 20:12:04.310277 kernel: secureboot: Secure boot disabled
Feb 13 20:12:04.310283 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:12:04.310289 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Feb 13 20:12:04.310294 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310300 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310308 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Feb 13 20:12:04.310316 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310322 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310328 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310336 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310342 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310348 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310354 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 13 20:12:04.310360 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310367 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 13 20:12:04.310373 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Feb 13 20:12:04.310379 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Feb 13 20:12:04.310385 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Feb 13 20:12:04.310391 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Feb 13 20:12:04.310397 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Feb 13 20:12:04.310405 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Feb 13 20:12:04.310411 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Feb 13 20:12:04.310417 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Feb 13 20:12:04.310423 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Feb 13 20:12:04.310429 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Feb 13 20:12:04.310436 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Feb 13 20:12:04.310442 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Feb 13 20:12:04.310448 kernel: NUMA: NODE_DATA [mem 0x1bf7f0800-0x1bf7f5fff]
Feb 13 20:12:04.310454 kernel: Zone ranges:
Feb 13 20:12:04.310460 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 13 20:12:04.310466 kernel: DMA32 empty
Feb 13 20:12:04.310472 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 20:12:04.310483 kernel: Movable zone start for each node
Feb 13 20:12:04.310489 kernel: Early memory node ranges
Feb 13 20:12:04.310496 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 13 20:12:04.310503 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Feb 13 20:12:04.310509 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Feb 13 20:12:04.310517 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Feb 13 20:12:04.310524 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Feb 13 20:12:04.310530 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Feb 13 20:12:04.310537 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 20:12:04.310544 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 13 20:12:04.310550 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 13 20:12:04.310557 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:12:04.310563 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:12:04.310570 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:12:04.310576 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 13 20:12:04.310583 kernel: psci: SMC Calling Convention v1.4
Feb 13 20:12:04.310589 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Feb 13 20:12:04.310597 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Feb 13 20:12:04.310604 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:12:04.310610 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:12:04.310617 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 20:12:04.310623 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:12:04.310630 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:12:04.310637 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:12:04.310643 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:12:04.310649 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:12:04.312692 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:12:04.312713 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:12:04.312726 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 13 20:12:04.312733 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:12:04.312740 kernel: alternatives: applying boot alternatives
Feb 13 20:12:04.312748 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 20:12:04.312756 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:12:04.312763 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:12:04.312769 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:12:04.312776 kernel: Fallback order for Node 0: 0
Feb 13 20:12:04.312783 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 13 20:12:04.312789 kernel: Policy zone: Normal
Feb 13 20:12:04.312796 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:12:04.312805 kernel: software IO TLB: area num 2.
Feb 13 20:12:04.312811 kernel: software IO TLB: mapped [mem 0x0000000036630000-0x000000003a630000] (64MB)
Feb 13 20:12:04.312818 kernel: Memory: 3982444K/4194160K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 211716K reserved, 0K cma-reserved)
Feb 13 20:12:04.312825 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:12:04.312832 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:12:04.312839 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:12:04.312846 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:12:04.312853 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:12:04.312860 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:12:04.312867 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:12:04.312873 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:12:04.312882 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:12:04.312888 kernel: GICv3: 960 SPIs implemented
Feb 13 20:12:04.312895 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:12:04.312902 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:12:04.312908 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:12:04.312915 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 13 20:12:04.312921 kernel: ITS: No ITS available, not enabling LPIs
Feb 13 20:12:04.312929 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:12:04.312935 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:12:04.312942 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:12:04.312949 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:12:04.312955 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:12:04.312964 kernel: Console: colour dummy device 80x25
Feb 13 20:12:04.312971 kernel: printk: console [tty1] enabled
Feb 13 20:12:04.312978 kernel: ACPI: Core revision 20230628
Feb 13 20:12:04.312985 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:12:04.312992 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:12:04.312999 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:12:04.313006 kernel: landlock: Up and running.
Feb 13 20:12:04.313013 kernel: SELinux: Initializing.
Feb 13 20:12:04.313020 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:12:04.313028 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:12:04.313035 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:12:04.313042 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:12:04.313049 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 13 20:12:04.313056 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Feb 13 20:12:04.313063 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 20:12:04.313070 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:12:04.313084 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:12:04.313091 kernel: Remapping and enabling EFI services.
Feb 13 20:12:04.313098 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:12:04.313105 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:12:04.313113 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 13 20:12:04.313121 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:12:04.313128 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:12:04.313136 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:12:04.313143 kernel: SMP: Total of 2 processors activated.
Feb 13 20:12:04.313150 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:12:04.313159 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 13 20:12:04.313166 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:12:04.313178 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:12:04.313185 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:12:04.313192 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:12:04.313199 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:12:04.313207 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:12:04.313214 kernel: alternatives: applying system-wide alternatives
Feb 13 20:12:04.313221 kernel: devtmpfs: initialized
Feb 13 20:12:04.313230 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:12:04.313237 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:12:04.313245 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:12:04.313252 kernel: SMBIOS 3.1.0 present.
Feb 13 20:12:04.313259 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Feb 13 20:12:04.313266 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:12:04.313273 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:12:04.313281 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:12:04.313290 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:12:04.313297 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:12:04.313305 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Feb 13 20:12:04.313312 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:12:04.313319 kernel: cpuidle: using governor menu
Feb 13 20:12:04.313326 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:12:04.313334 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:12:04.313341 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:12:04.313348 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:12:04.313356 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:12:04.313364 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:12:04.313371 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 20:12:04.313378 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:12:04.313385 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:12:04.313393 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:12:04.313400 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:12:04.313407 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:12:04.313414 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:12:04.313423 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:12:04.313430 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:12:04.313438 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:12:04.313445 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:12:04.313452 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:12:04.313459 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:12:04.313466 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:12:04.313473 kernel: ACPI: Interpreter enabled
Feb 13 20:12:04.313480 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:12:04.313487 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:12:04.313496 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:12:04.313504 kernel: printk: bootconsole [pl11] disabled
Feb 13 20:12:04.313511 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 13 20:12:04.313518 kernel: iommu: Default domain type: Translated
Feb 13 20:12:04.313525 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:12:04.313532 kernel: efivars: Registered efivars operations
Feb 13 20:12:04.313539 kernel: vgaarb: loaded
Feb 13 20:12:04.313547 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:12:04.313554 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:12:04.313563 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:12:04.313570 kernel: pnp: PnP ACPI init
Feb 13 20:12:04.313577 kernel: pnp: PnP ACPI: found 0 devices
Feb 13 20:12:04.313584 kernel: NET: Registered PF_INET protocol family
Feb 13 20:12:04.313592 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:12:04.313599 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:12:04.313606 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:12:04.313614 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:12:04.313621 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:12:04.313629 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:12:04.313637 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:12:04.313644 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:12:04.313651 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:12:04.313667 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:12:04.313675 kernel: kvm [1]: HYP mode not available
Feb 13 20:12:04.313682 kernel: Initialise system trusted keyrings
Feb 13 20:12:04.313689 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:12:04.313696 kernel: Key type asymmetric registered
Feb 13 20:12:04.313705 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:12:04.313712 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:12:04.313720 kernel: io scheduler mq-deadline registered
Feb 13 20:12:04.313727 kernel: io scheduler kyber registered
Feb 13 20:12:04.313734 kernel: io scheduler bfq registered
Feb 13 20:12:04.313741 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:12:04.313748 kernel: thunder_xcv, ver 1.0
Feb 13 20:12:04.313755 kernel: thunder_bgx, ver 1.0
Feb 13 20:12:04.313762 kernel: nicpf, ver 1.0
Feb 13 20:12:04.313771 kernel: nicvf, ver 1.0
Feb 13 20:12:04.313921 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:12:04.313992 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:12:03 UTC (1739477523)
Feb 13 20:12:04.314002 kernel: efifb: probing for efifb
Feb 13 20:12:04.314010 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 20:12:04.314017 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 20:12:04.314024 kernel: efifb: scrolling: redraw
Feb 13 20:12:04.314032 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 20:12:04.314041 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 20:12:04.314048 kernel: fb0: EFI VGA frame buffer device
Feb 13 20:12:04.314055 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 13 20:12:04.314062 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:12:04.314070 kernel: No ACPI PMU IRQ for CPU0
Feb 13 20:12:04.314077 kernel: No ACPI PMU IRQ for CPU1
Feb 13 20:12:04.314084 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 13 20:12:04.314091 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:12:04.314098 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:12:04.314106 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:12:04.314114 kernel: Segment Routing with IPv6
Feb 13 20:12:04.314121 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:12:04.314128 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:12:04.314135 kernel: Key type dns_resolver registered
Feb 13 20:12:04.314142 kernel: registered taskstats version 1
Feb 13 20:12:04.314149 kernel: Loading compiled-in X.509 certificates
Feb 13 20:12:04.314156 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936'
Feb 13 20:12:04.314164 kernel: Key type .fscrypt registered
Feb 13 20:12:04.314172 kernel: Key type fscrypt-provisioning registered
Feb 13 20:12:04.314179 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:12:04.314186 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:12:04.314194 kernel: ima: No architecture policies found
Feb 13 20:12:04.314201 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:12:04.314208 kernel: clk: Disabling unused clocks
Feb 13 20:12:04.314216 kernel: Freeing unused kernel memory: 39680K
Feb 13 20:12:04.314223 kernel: Run /init as init process
Feb 13 20:12:04.314231 kernel: with arguments:
Feb 13 20:12:04.314239 kernel: /init
Feb 13 20:12:04.314245 kernel: with environment:
Feb 13 20:12:04.314252 kernel: HOME=/
Feb 13 20:12:04.314259 kernel: TERM=linux
Feb 13 20:12:04.314266 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:12:04.314275 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:12:04.314284 systemd[1]: Detected virtualization microsoft.
Feb 13 20:12:04.314294 systemd[1]: Detected architecture arm64.
Feb 13 20:12:04.314302 systemd[1]: Running in initrd.
Feb 13 20:12:04.314309 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:12:04.314316 systemd[1]: Hostname set to .
Feb 13 20:12:04.314324 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:12:04.314332 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:12:04.314340 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:12:04.314347 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:12:04.314357 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:12:04.314365 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:12:04.314373 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:12:04.314381 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:12:04.314390 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:12:04.314398 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:12:04.314406 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:12:04.314415 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:12:04.314423 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:12:04.314431 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:12:04.314438 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:12:04.314446 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:12:04.314454 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:12:04.314462 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:12:04.314469 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:12:04.314477 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:12:04.314486 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:12:04.314494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:12:04.314502 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:12:04.314510 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:12:04.314517 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:12:04.314525 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:12:04.314533 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:12:04.314540 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:12:04.314549 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:12:04.314557 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:12:04.314584 systemd-journald[218]: Collecting audit messages is disabled.
Feb 13 20:12:04.314604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:12:04.314614 systemd-journald[218]: Journal started
Feb 13 20:12:04.314632 systemd-journald[218]: Runtime Journal (/run/log/journal/d774e5b3b3754f739c91147c9ab42686) is 8.0M, max 78.5M, 70.5M free.
Feb 13 20:12:04.326697 systemd-modules-load[219]: Inserted module 'overlay'
Feb 13 20:12:04.351653 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:12:04.351699 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:12:04.357611 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:12:04.373538 kernel: Bridge firewalling registered
Feb 13 20:12:04.360838 systemd-modules-load[219]: Inserted module 'br_netfilter'
Feb 13 20:12:04.370157 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:12:04.387928 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:12:04.397856 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:12:04.414362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:12:04.434993 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:12:04.451864 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:12:04.467493 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:12:04.490894 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:12:04.499688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:12:04.512447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:12:04.519153 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:12:04.539558 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:12:04.572955 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:12:04.581817 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:12:04.608845 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:12:04.629297 dracut-cmdline[250]: dracut-dracut-053
Feb 13 20:12:04.629297 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 20:12:04.634122 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:12:04.637381 systemd-resolved[252]: Positive Trust Anchors:
Feb 13 20:12:04.637393 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:12:04.637424 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:12:04.639725 systemd-resolved[252]: Defaulting to hostname 'linux'.
Feb 13 20:12:04.644256 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:12:04.683383 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:12:04.820675 kernel: SCSI subsystem initialized
Feb 13 20:12:04.827695 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:12:04.838691 kernel: iscsi: registered transport (tcp)
Feb 13 20:12:04.856359 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:12:04.856390 kernel: QLogic iSCSI HBA Driver
Feb 13 20:12:04.896051 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:12:04.914918 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:12:04.947676 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:12:04.947735 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:12:04.947754 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:12:05.003694 kernel: raid6: neonx8 gen() 15779 MB/s
Feb 13 20:12:05.021668 kernel: raid6: neonx4 gen() 15643 MB/s
Feb 13 20:12:05.041681 kernel: raid6: neonx2 gen() 13220 MB/s
Feb 13 20:12:05.064675 kernel: raid6: neonx1 gen() 10524 MB/s
Feb 13 20:12:05.085678 kernel: raid6: int64x8 gen() 6947 MB/s
Feb 13 20:12:05.105678 kernel: raid6: int64x4 gen() 7324 MB/s
Feb 13 20:12:05.126672 kernel: raid6: int64x2 gen() 6131 MB/s
Feb 13 20:12:05.149992 kernel: raid6: int64x1 gen() 5056 MB/s
Feb 13 20:12:05.150010 kernel: raid6: using algorithm neonx8 gen() 15779 MB/s
Feb 13 20:12:05.173773 kernel: raid6: .... xor() 11922 MB/s, rmw enabled
Feb 13 20:12:05.173785 kernel: raid6: using neon recovery algorithm
Feb 13 20:12:05.186107 kernel: xor: measuring software checksum speed
Feb 13 20:12:05.186134 kernel: 8regs : 19702 MB/sec
Feb 13 20:12:05.189641 kernel: 32regs : 19655 MB/sec
Feb 13 20:12:05.193154 kernel: arm64_neon : 26795 MB/sec
Feb 13 20:12:05.197270 kernel: xor: using function: arm64_neon (26795 MB/sec)
Feb 13 20:12:05.247684 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:12:05.260040 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:12:05.277805 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:12:05.300216 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Feb 13 20:12:05.305861 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:12:05.322914 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:12:05.345475 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation
Feb 13 20:12:05.373161 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:12:05.391914 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:12:05.431906 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:12:05.455922 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:12:05.481510 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:12:05.495516 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:12:05.509635 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:12:05.524046 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:12:05.542222 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:12:05.567779 kernel: hv_vmbus: Vmbus version:5.3
Feb 13 20:12:05.567809 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 20:12:05.559106 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:12:05.593640 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 20:12:05.593670 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 20:12:05.559254 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:12:05.642121 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 20:12:05.642149 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 20:12:05.642159 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 20:12:05.642168 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Feb 13 20:12:05.642178 kernel: scsi host0: storvsc_host_t
Feb 13 20:12:05.642213 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 13 20:12:05.642353 kernel: scsi host1: storvsc_host_t
Feb 13 20:12:05.593546 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:12:05.692638 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 13 20:12:05.692827 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 13 20:12:05.692921 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Feb 13 20:12:05.606318 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:12:05.606544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:12:05.675387 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:12:05.701243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:12:05.712498 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:12:05.750998 kernel: PTP clock support registered
Feb 13 20:12:05.740837 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:12:05.778640 kernel: hv_netvsc 0022487c-8f0d-0022-487c-8f0d0022487c eth0: VF slot 1 added
Feb 13 20:12:05.778828 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 20:12:05.778839 kernel: hv_vmbus: registering driver hv_utils
Feb 13 20:12:05.740943 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:12:05.799466 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 20:12:05.799488 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 20:12:05.799505 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 20:12:05.800141 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:12:05.507932 kernel: hv_vmbus: registering driver hv_pci Feb 13 20:12:05.514529 kernel: hv_pci de6a4c17-36bc-4581-a183-8aa321b35769: PCI VMBus probing: Using version 0x10004 Feb 13 20:12:05.633294 systemd-journald[218]: Time jumped backwards, rotating. Feb 13 20:12:05.633367 kernel: hv_pci de6a4c17-36bc-4581-a183-8aa321b35769: PCI host bridge to bus 36bc:00 Feb 13 20:12:05.633477 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 20:12:05.633576 kernel: pci_bus 36bc:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 13 20:12:05.633782 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 20:12:05.633792 kernel: pci_bus 36bc:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 20:12:05.633874 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 20:12:05.633959 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 20:12:05.634049 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 20:12:05.634131 kernel: pci 36bc:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 13 20:12:05.634225 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 20:12:05.634305 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 20:12:05.634387 kernel: pci 36bc:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 13 20:12:05.634470 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 20:12:05.634550 kernel: pci 36bc:00:02.0: enabling Extended Tags Feb 13 20:12:05.634629 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:12:05.635219 kernel: pci 36bc:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 36bc:00:02.0 (capable of 126.016 Gb/s 
with 8.0 GT/s PCIe x16 link) Feb 13 20:12:05.635353 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 20:12:05.635444 kernel: pci_bus 36bc:00: busn_res: [bus 00-ff] end is updated to 00 Feb 13 20:12:05.635528 kernel: pci 36bc:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 13 20:12:05.481977 systemd-resolved[252]: Clock change detected. Flushing caches. Feb 13 20:12:05.605024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:12:05.621866 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:12:05.671951 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:12:05.704432 kernel: mlx5_core 36bc:00:02.0: enabling device (0000 -> 0002) Feb 13 20:12:05.924869 kernel: mlx5_core 36bc:00:02.0: firmware version: 16.30.1284 Feb 13 20:12:05.925007 kernel: hv_netvsc 0022487c-8f0d-0022-487c-8f0d0022487c eth0: VF registering: eth1 Feb 13 20:12:05.925127 kernel: mlx5_core 36bc:00:02.0 eth1: joined to eth0 Feb 13 20:12:05.925226 kernel: mlx5_core 36bc:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Feb 13 20:12:05.932675 kernel: mlx5_core 36bc:00:02.0 enP14012s1: renamed from eth1 Feb 13 20:12:06.126962 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 20:12:06.284683 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (499) Feb 13 20:12:06.301023 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 20:12:06.342607 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Feb 13 20:12:06.442659 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (500) Feb 13 20:12:06.456044 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. 
Feb 13 20:12:06.463362 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 20:12:06.495970 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:12:06.515754 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:12:06.522662 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:12:07.531683 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:12:07.532664 disk-uuid[607]: The operation has completed successfully. Feb 13 20:12:07.592045 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:12:07.592149 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:12:07.619782 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:12:07.634277 sh[693]: Success Feb 13 20:12:07.676697 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 20:12:07.991087 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:12:08.021757 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:12:08.028125 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:12:08.060023 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44 Feb 13 20:12:08.060076 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:12:08.060086 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:12:08.071944 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:12:08.076436 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:12:08.747395 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:12:08.753115 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Feb 13 20:12:08.776959 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:12:08.784872 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:12:08.826927 kernel: BTRFS info (device sda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 20:12:08.826990 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:12:08.827001 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:12:08.848697 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:12:08.866982 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:12:08.872082 kernel: BTRFS info (device sda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 20:12:08.881215 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:12:08.896945 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:12:08.921711 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:12:08.939813 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:12:08.977069 systemd-networkd[877]: lo: Link UP Feb 13 20:12:08.977676 systemd-networkd[877]: lo: Gained carrier Feb 13 20:12:08.979386 systemd-networkd[877]: Enumeration completed Feb 13 20:12:08.979606 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:12:08.987463 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:12:08.987467 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:12:08.988521 systemd[1]: Reached target network.target - Network. 
Feb 13 20:12:09.077657 kernel: mlx5_core 36bc:00:02.0 enP14012s1: Link up Feb 13 20:12:09.120655 kernel: hv_netvsc 0022487c-8f0d-0022-487c-8f0d0022487c eth0: Data path switched to VF: enP14012s1 Feb 13 20:12:09.121419 systemd-networkd[877]: enP14012s1: Link UP Feb 13 20:12:09.121687 systemd-networkd[877]: eth0: Link UP Feb 13 20:12:09.122070 systemd-networkd[877]: eth0: Gained carrier Feb 13 20:12:09.122079 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:12:09.146222 systemd-networkd[877]: enP14012s1: Gained carrier Feb 13 20:12:09.160715 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 20:12:10.200782 systemd-networkd[877]: enP14012s1: Gained IPv6LL Feb 13 20:12:10.392863 ignition[864]: Ignition 2.20.0 Feb 13 20:12:10.392874 ignition[864]: Stage: fetch-offline Feb 13 20:12:10.397601 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:12:10.392912 ignition[864]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:12:10.410824 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 20:12:10.392921 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:12:10.393010 ignition[864]: parsed url from cmdline: "" Feb 13 20:12:10.393013 ignition[864]: no config URL provided Feb 13 20:12:10.393018 ignition[864]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:12:10.393025 ignition[864]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:12:10.393029 ignition[864]: failed to fetch config: resource requires networking Feb 13 20:12:10.393193 ignition[864]: Ignition finished successfully Feb 13 20:12:10.440141 ignition[886]: Ignition 2.20.0 Feb 13 20:12:10.440148 ignition[886]: Stage: fetch Feb 13 20:12:10.440322 ignition[886]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:12:10.440331 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:12:10.440418 ignition[886]: parsed url from cmdline: "" Feb 13 20:12:10.440421 ignition[886]: no config URL provided Feb 13 20:12:10.440426 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:12:10.440433 ignition[886]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:12:10.440457 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 20:12:10.550674 ignition[886]: GET result: OK Feb 13 20:12:10.551294 ignition[886]: config has been read from IMDS userdata Feb 13 20:12:10.551336 ignition[886]: parsing config with SHA512: bb99bdaf762dc61d2380854f62a1fc3caf72af7c0c514b55290c71771ced0328562070eafee6e64729adcbdc29ae1a1d578577791a14227b06bcc06ed37627aa Feb 13 20:12:10.556350 unknown[886]: fetched base config from "system" Feb 13 20:12:10.556783 ignition[886]: fetch: fetch complete Feb 13 20:12:10.556358 unknown[886]: fetched base config from "system" Feb 13 20:12:10.556788 ignition[886]: fetch: fetch passed Feb 13 20:12:10.556364 unknown[886]: fetched user config from "azure" Feb 13 20:12:10.556848 ignition[886]: Ignition finished 
successfully Feb 13 20:12:10.560300 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:12:10.577907 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:12:10.608150 ignition[893]: Ignition 2.20.0 Feb 13 20:12:10.608163 ignition[893]: Stage: kargs Feb 13 20:12:10.616591 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:12:10.608346 ignition[893]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:12:10.608356 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:12:10.609411 ignition[893]: kargs: kargs passed Feb 13 20:12:10.633849 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:12:10.609462 ignition[893]: Ignition finished successfully Feb 13 20:12:10.658490 ignition[899]: Ignition 2.20.0 Feb 13 20:12:10.658497 ignition[899]: Stage: disks Feb 13 20:12:10.663679 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:12:10.658736 ignition[899]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:12:10.671971 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:12:10.658746 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:12:10.683260 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:12:10.659778 ignition[899]: disks: disks passed Feb 13 20:12:10.695057 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:12:10.659824 ignition[899]: Ignition finished successfully Feb 13 20:12:10.706991 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:12:10.712937 systemd-networkd[877]: eth0: Gained IPv6LL Feb 13 20:12:10.725949 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:12:10.741868 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Feb 13 20:12:10.845321 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 20:12:10.851134 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:12:10.873748 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:12:10.931875 kernel: EXT4-fs (sda9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none. Feb 13 20:12:10.927613 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:12:10.934157 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:12:11.009724 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:12:11.018859 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:12:11.024897 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:12:11.054025 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (919) Feb 13 20:12:11.046530 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:12:11.092192 kernel: BTRFS info (device sda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 20:12:11.092215 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:12:11.092225 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:12:11.046568 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:12:11.069590 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:12:11.114657 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:12:11.118906 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:12:11.131991 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:12:12.059355 coreos-metadata[921]: Feb 13 20:12:12.059 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 20:12:12.070486 coreos-metadata[921]: Feb 13 20:12:12.070 INFO Fetch successful Feb 13 20:12:12.076580 coreos-metadata[921]: Feb 13 20:12:12.075 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 20:12:12.088556 coreos-metadata[921]: Feb 13 20:12:12.088 INFO Fetch successful Feb 13 20:12:12.094225 coreos-metadata[921]: Feb 13 20:12:12.094 INFO wrote hostname ci-4152.2.1-a-1780829b1e to /sysroot/etc/hostname Feb 13 20:12:12.103279 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:12:12.309340 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:12:12.365879 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:12:12.406901 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:12:12.415536 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:12:13.711428 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:12:13.734963 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:12:13.751621 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:12:13.767685 kernel: BTRFS info (device sda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 20:12:13.763124 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:12:13.791197 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 20:12:13.803775 ignition[1037]: INFO : Ignition 2.20.0 Feb 13 20:12:13.803775 ignition[1037]: INFO : Stage: mount Feb 13 20:12:13.813332 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:12:13.813332 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:12:13.813332 ignition[1037]: INFO : mount: mount passed Feb 13 20:12:13.813332 ignition[1037]: INFO : Ignition finished successfully Feb 13 20:12:13.811373 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:12:13.841867 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:12:13.859464 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:12:13.888892 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1050) Feb 13 20:12:13.888942 kernel: BTRFS info (device sda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 20:12:13.895105 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:12:13.900140 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:12:13.908664 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:12:13.910747 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:12:13.943270 ignition[1067]: INFO : Ignition 2.20.0 Feb 13 20:12:13.943270 ignition[1067]: INFO : Stage: files Feb 13 20:12:13.951445 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:12:13.951445 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:12:13.951445 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:12:13.990919 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:12:13.998615 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:12:14.121227 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:12:14.129139 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:12:14.129139 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:12:14.122157 unknown[1067]: wrote ssh authorized keys file for user: core Feb 13 20:12:14.155652 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:12:14.165910 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:12:14.165910 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 20:12:14.165910 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 20:12:14.220973 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 20:12:14.393717 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" 
Feb 13 20:12:14.393717 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 
20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Feb 13 20:12:14.857883 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 20:12:15.080290 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 20:12:15.080290 ignition[1067]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Feb 13 
20:12:15.105742 ignition[1067]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: files passed Feb 13 20:12:15.105742 ignition[1067]: INFO : Ignition finished successfully Feb 13 20:12:15.100322 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:12:15.132983 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:12:15.148818 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:12:15.172044 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:12:15.298631 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:12:15.298631 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:12:15.173689 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:12:15.330914 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:12:15.205948 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:12:15.215798 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:12:15.253914 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:12:15.307454 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:12:15.307595 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Feb 13 20:12:15.325154 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:12:15.336755 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:12:15.354784 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:12:15.381958 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:12:15.406435 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:12:15.439941 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:12:15.465072 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:12:15.471691 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:12:15.484909 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:12:15.496559 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:12:15.496701 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:12:15.513109 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:12:15.525942 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:12:15.536930 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:12:15.547821 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:12:15.560504 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:12:15.573346 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:12:15.585171 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:12:15.598387 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:12:15.611445 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Feb 13 20:12:15.623872 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:12:15.634487 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:12:15.634686 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:12:15.651143 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:12:15.657810 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:12:15.672130 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:12:15.678055 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:12:15.686169 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:12:15.686349 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:12:15.704693 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:12:15.704877 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:12:15.719733 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:12:15.719880 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:12:15.730710 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:12:15.730859 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:12:15.764802 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:12:15.781883 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Feb 13 20:12:15.812471 ignition[1119]: INFO : Ignition 2.20.0 Feb 13 20:12:15.812471 ignition[1119]: INFO : Stage: umount Feb 13 20:12:15.812471 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:12:15.812471 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:12:15.812471 ignition[1119]: INFO : umount: umount passed Feb 13 20:12:15.812471 ignition[1119]: INFO : Ignition finished successfully Feb 13 20:12:15.782120 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:12:15.814461 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:12:15.827493 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:12:15.827741 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:12:15.841701 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:12:15.841874 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:12:15.870554 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:12:15.871396 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:12:15.871497 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:12:15.881837 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:12:15.881932 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:12:15.899221 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:12:15.899321 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:12:15.915028 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:12:15.915104 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:12:15.930292 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:12:15.930358 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Feb 13 20:12:15.942265 systemd[1]: Stopped target network.target - Network.
Feb 13 20:12:15.956326 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:12:15.956405 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:12:15.968543 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:12:15.978629 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:12:15.983595 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:12:15.990733 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:12:16.002610 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:12:16.015624 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:12:16.015729 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:12:16.027084 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:12:16.027124 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:12:16.038340 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:12:16.038399 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:12:16.050852 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:12:16.050901 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:12:16.058189 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:12:16.293003 kernel: hv_netvsc 0022487c-8f0d-0022-487c-8f0d0022487c eth0: Data path switched from VF: enP14012s1
Feb 13 20:12:16.071314 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:12:16.082677 systemd-networkd[877]: eth0: DHCPv6 lease lost
Feb 13 20:12:16.083107 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:12:16.083194 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:12:16.089398 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:12:16.089492 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:12:16.101913 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:12:16.101985 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:12:16.115229 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:12:16.115296 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:12:16.152867 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:12:16.165754 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:12:16.165838 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:12:16.180430 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:12:16.196385 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:12:16.196496 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:12:16.228129 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:12:16.228289 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:12:16.240935 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:12:16.241020 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:12:16.252328 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:12:16.252380 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:12:16.264468 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:12:16.264525 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:12:16.293053 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:12:16.293128 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:12:16.305326 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:12:16.305396 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:12:16.346924 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:12:16.363416 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:12:16.363489 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:12:16.375756 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:12:16.375819 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:12:16.388007 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:12:16.601035 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:12:16.388058 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:12:16.400762 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:12:16.400815 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:12:16.414170 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:12:16.414226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:12:16.428624 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:12:16.428745 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:12:16.443233 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:12:16.443361 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:12:16.455192 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:12:16.489924 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:12:16.522957 systemd[1]: Switching root.
Feb 13 20:12:16.617961 systemd-journald[218]: Journal stopped
Feb 13 20:12:04.310208 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:12:04.310231 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025
Feb 13 20:12:04.310240 kernel: KASLR enabled
Feb 13 20:12:04.310245 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 13 20:12:04.310253 kernel: printk: bootconsole [pl11] enabled
Feb 13 20:12:04.310258 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:12:04.310265 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e423d98
Feb 13 20:12:04.310271 kernel: random: crng init done
Feb 13 20:12:04.310277 kernel: secureboot: Secure boot disabled
Feb 13 20:12:04.310283 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:12:04.310289 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Feb 13 20:12:04.310294 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310300 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310308 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Feb 13 20:12:04.310316 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310322 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310328 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310336 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310342 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310348 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310354 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 13 20:12:04.310360 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:12:04.310367 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 13 20:12:04.310373 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Feb 13 20:12:04.310379 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Feb 13 20:12:04.310385 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Feb 13 20:12:04.310391 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Feb 13 20:12:04.310397 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Feb 13 20:12:04.310405 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Feb 13 20:12:04.310411 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Feb 13 20:12:04.310417 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Feb 13 20:12:04.310423 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Feb 13 20:12:04.310429 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Feb 13 20:12:04.310436 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Feb 13 20:12:04.310442 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Feb 13 20:12:04.310448 kernel: NUMA: NODE_DATA [mem 0x1bf7f0800-0x1bf7f5fff]
Feb 13 20:12:04.310454 kernel: Zone ranges:
Feb 13 20:12:04.310460 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 13 20:12:04.310466 kernel: DMA32 empty
Feb 13 20:12:04.310472 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 20:12:04.310483 kernel: Movable zone start for each node
Feb 13 20:12:04.310489 kernel: Early memory node ranges
Feb 13 20:12:04.310496 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 13 20:12:04.310503 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Feb 13 20:12:04.310509 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Feb 13 20:12:04.310517 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Feb 13 20:12:04.310524 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Feb 13 20:12:04.310530 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Feb 13 20:12:04.310537 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 20:12:04.310544 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 13 20:12:04.310550 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 13 20:12:04.310557 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:12:04.310563 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:12:04.310570 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:12:04.310576 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 13 20:12:04.310583 kernel: psci: SMC Calling Convention v1.4
Feb 13 20:12:04.310589 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Feb 13 20:12:04.310597 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Feb 13 20:12:04.310604 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:12:04.310610 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:12:04.310617 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 20:12:04.310623 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:12:04.310630 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:12:04.310637 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:12:04.310643 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:12:04.310649 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:12:04.312692 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:12:04.312713 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:12:04.312726 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 13 20:12:04.312733 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:12:04.312740 kernel: alternatives: applying boot alternatives
Feb 13 20:12:04.312748 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 20:12:04.312756 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:12:04.312763 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:12:04.312769 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:12:04.312776 kernel: Fallback order for Node 0: 0
Feb 13 20:12:04.312783 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 13 20:12:04.312789 kernel: Policy zone: Normal
Feb 13 20:12:04.312796 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:12:04.312805 kernel: software IO TLB: area num 2.
Feb 13 20:12:04.312811 kernel: software IO TLB: mapped [mem 0x0000000036630000-0x000000003a630000] (64MB)
Feb 13 20:12:04.312818 kernel: Memory: 3982444K/4194160K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 211716K reserved, 0K cma-reserved)
Feb 13 20:12:04.312825 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:12:04.312832 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:12:04.312839 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:12:04.312846 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:12:04.312853 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:12:04.312860 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:12:04.312867 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:12:04.312873 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:12:04.312882 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:12:04.312888 kernel: GICv3: 960 SPIs implemented
Feb 13 20:12:04.312895 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:12:04.312902 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:12:04.312908 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:12:04.312915 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 13 20:12:04.312921 kernel: ITS: No ITS available, not enabling LPIs
Feb 13 20:12:04.312929 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:12:04.312935 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:12:04.312942 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:12:04.312949 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:12:04.312955 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:12:04.312964 kernel: Console: colour dummy device 80x25
Feb 13 20:12:04.312971 kernel: printk: console [tty1] enabled
Feb 13 20:12:04.312978 kernel: ACPI: Core revision 20230628
Feb 13 20:12:04.312985 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:12:04.312992 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:12:04.312999 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:12:04.313006 kernel: landlock: Up and running.
Feb 13 20:12:04.313013 kernel: SELinux: Initializing.
Feb 13 20:12:04.313020 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:12:04.313028 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:12:04.313035 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:12:04.313042 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:12:04.313049 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 13 20:12:04.313056 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Feb 13 20:12:04.313063 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 20:12:04.313070 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:12:04.313084 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:12:04.313091 kernel: Remapping and enabling EFI services.
Feb 13 20:12:04.313098 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:12:04.313105 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:12:04.313113 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 13 20:12:04.313121 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:12:04.313128 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:12:04.313136 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:12:04.313143 kernel: SMP: Total of 2 processors activated.
Feb 13 20:12:04.313150 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:12:04.313159 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 13 20:12:04.313166 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:12:04.313178 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:12:04.313185 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:12:04.313192 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:12:04.313199 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:12:04.313207 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:12:04.313214 kernel: alternatives: applying system-wide alternatives
Feb 13 20:12:04.313221 kernel: devtmpfs: initialized
Feb 13 20:12:04.313230 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:12:04.313237 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:12:04.313245 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:12:04.313252 kernel: SMBIOS 3.1.0 present.
Feb 13 20:12:04.313259 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Feb 13 20:12:04.313266 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:12:04.313273 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:12:04.313281 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:12:04.313290 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:12:04.313297 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:12:04.313305 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Feb 13 20:12:04.313312 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:12:04.313319 kernel: cpuidle: using governor menu
Feb 13 20:12:04.313326 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:12:04.313334 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:12:04.313341 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:12:04.313348 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:12:04.313356 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:12:04.313364 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:12:04.313371 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 20:12:04.313378 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:12:04.313385 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:12:04.313393 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:12:04.313400 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:12:04.313407 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:12:04.313414 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:12:04.313423 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:12:04.313430 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:12:04.313438 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:12:04.313445 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:12:04.313452 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:12:04.313459 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:12:04.313466 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:12:04.313473 kernel: ACPI: Interpreter enabled
Feb 13 20:12:04.313480 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:12:04.313487 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:12:04.313496 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:12:04.313504 kernel: printk: bootconsole [pl11] disabled
Feb 13 20:12:04.313511 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 13 20:12:04.313518 kernel: iommu: Default domain type: Translated
Feb 13 20:12:04.313525 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:12:04.313532 kernel: efivars: Registered efivars operations
Feb 13 20:12:04.313539 kernel: vgaarb: loaded
Feb 13 20:12:04.313547 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:12:04.313554 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:12:04.313563 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:12:04.313570 kernel: pnp: PnP ACPI init
Feb 13 20:12:04.313577 kernel: pnp: PnP ACPI: found 0 devices
Feb 13 20:12:04.313584 kernel: NET: Registered PF_INET protocol family
Feb 13 20:12:04.313592 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:12:04.313599 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:12:04.313606 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:12:04.313614 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:12:04.313621 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:12:04.313629 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:12:04.313637 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:12:04.313644 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:12:04.313651 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:12:04.313667 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:12:04.313675 kernel: kvm [1]: HYP mode not available
Feb 13 20:12:04.313682 kernel: Initialise system trusted keyrings
Feb 13 20:12:04.313689 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:12:04.313696 kernel: Key type asymmetric registered
Feb 13 20:12:04.313705 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:12:04.313712 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:12:04.313720 kernel: io scheduler mq-deadline registered
Feb 13 20:12:04.313727 kernel: io scheduler kyber registered
Feb 13 20:12:04.313734 kernel: io scheduler bfq registered
Feb 13 20:12:04.313741 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:12:04.313748 kernel: thunder_xcv, ver 1.0
Feb 13 20:12:04.313755 kernel: thunder_bgx, ver 1.0
Feb 13 20:12:04.313762 kernel: nicpf, ver 1.0
Feb 13 20:12:04.313771 kernel: nicvf, ver 1.0
Feb 13 20:12:04.313921 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:12:04.313992 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:12:03 UTC (1739477523)
Feb 13 20:12:04.314002 kernel: efifb: probing for efifb
Feb 13 20:12:04.314010 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 20:12:04.314017 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 20:12:04.314024 kernel: efifb: scrolling: redraw
Feb 13 20:12:04.314032 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 20:12:04.314041 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 20:12:04.314048 kernel: fb0: EFI VGA frame buffer device
Feb 13 20:12:04.314055 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 13 20:12:04.314062 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:12:04.314070 kernel: No ACPI PMU IRQ for CPU0
Feb 13 20:12:04.314077 kernel: No ACPI PMU IRQ for CPU1
Feb 13 20:12:04.314084 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 13 20:12:04.314091 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:12:04.314098 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:12:04.314106 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:12:04.314114 kernel: Segment Routing with IPv6
Feb 13 20:12:04.314121 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:12:04.314128 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:12:04.314135 kernel: Key type dns_resolver registered
Feb 13 20:12:04.314142 kernel: registered taskstats version 1
Feb 13 20:12:04.314149 kernel: Loading compiled-in X.509 certificates
Feb 13 20:12:04.314156 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936'
Feb 13 20:12:04.314164 kernel: Key type .fscrypt registered
Feb 13 20:12:04.314172 kernel: Key type fscrypt-provisioning registered
Feb 13 20:12:04.314179 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:12:04.314186 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:12:04.314194 kernel: ima: No architecture policies found
Feb 13 20:12:04.314201 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:12:04.314208 kernel: clk: Disabling unused clocks
Feb 13 20:12:04.314216 kernel: Freeing unused kernel memory: 39680K
Feb 13 20:12:04.314223 kernel: Run /init as init process
Feb 13 20:12:04.314231 kernel: with arguments:
Feb 13 20:12:04.314239 kernel: /init
Feb 13 20:12:04.314245 kernel: with environment:
Feb 13 20:12:04.314252 kernel: HOME=/
Feb 13 20:12:04.314259 kernel: TERM=linux
Feb 13 20:12:04.314266 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:12:04.314275 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:12:04.314284 systemd[1]: Detected virtualization microsoft.
Feb 13 20:12:04.314294 systemd[1]: Detected architecture arm64.
Feb 13 20:12:04.314302 systemd[1]: Running in initrd.
Feb 13 20:12:04.314309 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:12:04.314316 systemd[1]: Hostname set to .
Feb 13 20:12:04.314324 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:12:04.314332 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:12:04.314340 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:12:04.314347 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:12:04.314357 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:12:04.314365 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:12:04.314373 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:12:04.314381 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:12:04.314390 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:12:04.314398 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:12:04.314406 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:12:04.314415 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:12:04.314423 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:12:04.314431 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:12:04.314438 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:12:04.314446 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:12:04.314454 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:12:04.314462 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:12:04.314469 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:12:04.314477 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:12:04.314486 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:12:04.314494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:12:04.314502 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:12:04.314510 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:12:04.314517 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:12:04.314525 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:12:04.314533 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:12:04.314540 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:12:04.314549 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:12:04.314557 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:12:04.314584 systemd-journald[218]: Collecting audit messages is disabled.
Feb 13 20:12:04.314604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:12:04.314614 systemd-journald[218]: Journal started
Feb 13 20:12:04.314632 systemd-journald[218]: Runtime Journal (/run/log/journal/d774e5b3b3754f739c91147c9ab42686) is 8.0M, max 78.5M, 70.5M free.
Feb 13 20:12:04.326697 systemd-modules-load[219]: Inserted module 'overlay'
Feb 13 20:12:04.351653 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:12:04.351699 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:12:04.357611 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:12:04.373538 kernel: Bridge firewalling registered
Feb 13 20:12:04.360838 systemd-modules-load[219]: Inserted module 'br_netfilter'
Feb 13 20:12:04.370157 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:12:04.387928 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:12:04.397856 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:12:04.414362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:12:04.434993 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:12:04.451864 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:12:04.467493 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:12:04.490894 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:12:04.499688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:12:04.512447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:12:04.519153 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:12:04.539558 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:12:04.572955 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:12:04.581817 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:12:04.608845 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:12:04.629297 dracut-cmdline[250]: dracut-dracut-053
Feb 13 20:12:04.629297 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 20:12:04.634122 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:12:04.637381 systemd-resolved[252]: Positive Trust Anchors: Feb 13 20:12:04.637393 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:12:04.637424 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:12:04.639725 systemd-resolved[252]: Defaulting to hostname 'linux'. Feb 13 20:12:04.644256 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:12:04.683383 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:12:04.820675 kernel: SCSI subsystem initialized Feb 13 20:12:04.827695 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:12:04.838691 kernel: iscsi: registered transport (tcp) Feb 13 20:12:04.856359 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:12:04.856390 kernel: QLogic iSCSI HBA Driver Feb 13 20:12:04.896051 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:12:04.914918 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:12:04.947676 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 20:12:04.947735 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:12:04.947754 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:12:05.003694 kernel: raid6: neonx8 gen() 15779 MB/s Feb 13 20:12:05.021668 kernel: raid6: neonx4 gen() 15643 MB/s Feb 13 20:12:05.041681 kernel: raid6: neonx2 gen() 13220 MB/s Feb 13 20:12:05.064675 kernel: raid6: neonx1 gen() 10524 MB/s Feb 13 20:12:05.085678 kernel: raid6: int64x8 gen() 6947 MB/s Feb 13 20:12:05.105678 kernel: raid6: int64x4 gen() 7324 MB/s Feb 13 20:12:05.126672 kernel: raid6: int64x2 gen() 6131 MB/s Feb 13 20:12:05.149992 kernel: raid6: int64x1 gen() 5056 MB/s Feb 13 20:12:05.150010 kernel: raid6: using algorithm neonx8 gen() 15779 MB/s Feb 13 20:12:05.173773 kernel: raid6: .... xor() 11922 MB/s, rmw enabled Feb 13 20:12:05.173785 kernel: raid6: using neon recovery algorithm Feb 13 20:12:05.186107 kernel: xor: measuring software checksum speed Feb 13 20:12:05.186134 kernel: 8regs : 19702 MB/sec Feb 13 20:12:05.189641 kernel: 32regs : 19655 MB/sec Feb 13 20:12:05.193154 kernel: arm64_neon : 26795 MB/sec Feb 13 20:12:05.197270 kernel: xor: using function: arm64_neon (26795 MB/sec) Feb 13 20:12:05.247684 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:12:05.260040 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:12:05.277805 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:12:05.300216 systemd-udevd[437]: Using default interface naming scheme 'v255'. Feb 13 20:12:05.305861 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:12:05.322914 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:12:05.345475 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation Feb 13 20:12:05.373161 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 20:12:05.391914 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:12:05.431906 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:12:05.455922 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:12:05.481510 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:12:05.495516 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:12:05.509635 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:12:05.524046 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:12:05.542222 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:12:05.567779 kernel: hv_vmbus: Vmbus version:5.3 Feb 13 20:12:05.567809 kernel: hv_vmbus: registering driver hid_hyperv Feb 13 20:12:05.559106 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:12:05.593640 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 20:12:05.593670 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 20:12:05.559254 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:12:05.642121 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 13 20:12:05.642149 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 20:12:05.642159 kernel: hv_vmbus: registering driver hv_netvsc Feb 13 20:12:05.642168 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Feb 13 20:12:05.642178 kernel: scsi host0: storvsc_host_t Feb 13 20:12:05.642213 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 13 20:12:05.642353 kernel: scsi host1: storvsc_host_t Feb 13 20:12:05.593546 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:12:05.692638 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 20:12:05.692827 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 20:12:05.692921 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Feb 13 20:12:05.606318 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:12:05.606544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:12:05.675387 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:12:05.701243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:12:05.712498 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:12:05.750998 kernel: PTP clock support registered Feb 13 20:12:05.740837 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:12:05.778640 kernel: hv_netvsc 0022487c-8f0d-0022-487c-8f0d0022487c eth0: VF slot 1 added Feb 13 20:12:05.778828 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 20:12:05.778839 kernel: hv_vmbus: registering driver hv_utils Feb 13 20:12:05.740943 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 20:12:05.799466 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 20:12:05.799488 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 20:12:05.799505 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 20:12:05.800141 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:12:05.507932 kernel: hv_vmbus: registering driver hv_pci Feb 13 20:12:05.514529 kernel: hv_pci de6a4c17-36bc-4581-a183-8aa321b35769: PCI VMBus probing: Using version 0x10004 Feb 13 20:12:05.633294 systemd-journald[218]: Time jumped backwards, rotating. Feb 13 20:12:05.633367 kernel: hv_pci de6a4c17-36bc-4581-a183-8aa321b35769: PCI host bridge to bus 36bc:00 Feb 13 20:12:05.633477 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 20:12:05.633576 kernel: pci_bus 36bc:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 13 20:12:05.633782 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 20:12:05.633792 kernel: pci_bus 36bc:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 20:12:05.633874 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 20:12:05.633959 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 20:12:05.634049 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 20:12:05.634131 kernel: pci 36bc:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 13 20:12:05.634225 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 20:12:05.634305 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 20:12:05.634387 kernel: pci 36bc:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 13 20:12:05.634470 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 20:12:05.634550 kernel: pci 36bc:00:02.0: enabling Extended Tags Feb 13 20:12:05.634629 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:12:05.635219 kernel: pci 36bc:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 36bc:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 13 20:12:05.635353 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 20:12:05.635444 kernel: pci_bus 36bc:00: busn_res: [bus 00-ff] end is updated to 00 Feb 13 20:12:05.635528 kernel: pci 36bc:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 13 20:12:05.481977 systemd-resolved[252]: Clock change detected. Flushing caches. Feb 13 20:12:05.605024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:12:05.621866 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:12:05.671951 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:12:05.704432 kernel: mlx5_core 36bc:00:02.0: enabling device (0000 -> 0002) Feb 13 20:12:05.924869 kernel: mlx5_core 36bc:00:02.0: firmware version: 16.30.1284 Feb 13 20:12:05.925007 kernel: hv_netvsc 0022487c-8f0d-0022-487c-8f0d0022487c eth0: VF registering: eth1 Feb 13 20:12:05.925127 kernel: mlx5_core 36bc:00:02.0 eth1: joined to eth0 Feb 13 20:12:05.925226 kernel: mlx5_core 36bc:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Feb 13 20:12:05.932675 kernel: mlx5_core 36bc:00:02.0 enP14012s1: renamed from eth1 Feb 13 20:12:06.126962 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 20:12:06.284683 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (499) Feb 13 20:12:06.301023 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 20:12:06.342607 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Feb 13 20:12:06.442659 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (500) Feb 13 20:12:06.456044 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. 
Feb 13 20:12:06.463362 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 20:12:06.495970 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:12:06.515754 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:12:06.522662 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:12:07.531683 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:12:07.532664 disk-uuid[607]: The operation has completed successfully. Feb 13 20:12:07.592045 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:12:07.592149 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:12:07.619782 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:12:07.634277 sh[693]: Success Feb 13 20:12:07.676697 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 20:12:07.991087 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:12:08.021757 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:12:08.028125 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:12:08.060023 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44 Feb 13 20:12:08.060076 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:12:08.060086 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:12:08.071944 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:12:08.076436 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:12:08.747395 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:12:08.753115 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Feb 13 20:12:08.776959 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:12:08.784872 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:12:08.826927 kernel: BTRFS info (device sda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 20:12:08.826990 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:12:08.827001 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:12:08.848697 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:12:08.866982 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:12:08.872082 kernel: BTRFS info (device sda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 20:12:08.881215 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:12:08.896945 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:12:08.921711 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:12:08.939813 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:12:08.977069 systemd-networkd[877]: lo: Link UP Feb 13 20:12:08.977676 systemd-networkd[877]: lo: Gained carrier Feb 13 20:12:08.979386 systemd-networkd[877]: Enumeration completed Feb 13 20:12:08.979606 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:12:08.987463 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:12:08.987467 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:12:08.988521 systemd[1]: Reached target network.target - Network. 
Feb 13 20:12:09.077657 kernel: mlx5_core 36bc:00:02.0 enP14012s1: Link up Feb 13 20:12:09.120655 kernel: hv_netvsc 0022487c-8f0d-0022-487c-8f0d0022487c eth0: Data path switched to VF: enP14012s1 Feb 13 20:12:09.121419 systemd-networkd[877]: enP14012s1: Link UP Feb 13 20:12:09.121687 systemd-networkd[877]: eth0: Link UP Feb 13 20:12:09.122070 systemd-networkd[877]: eth0: Gained carrier Feb 13 20:12:09.122079 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:12:09.146222 systemd-networkd[877]: enP14012s1: Gained carrier Feb 13 20:12:09.160715 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 20:12:10.200782 systemd-networkd[877]: enP14012s1: Gained IPv6LL Feb 13 20:12:10.392863 ignition[864]: Ignition 2.20.0 Feb 13 20:12:10.392874 ignition[864]: Stage: fetch-offline Feb 13 20:12:10.397601 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:12:10.392912 ignition[864]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:12:10.410824 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 20:12:10.392921 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:12:10.393010 ignition[864]: parsed url from cmdline: "" Feb 13 20:12:10.393013 ignition[864]: no config URL provided Feb 13 20:12:10.393018 ignition[864]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:12:10.393025 ignition[864]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:12:10.393029 ignition[864]: failed to fetch config: resource requires networking Feb 13 20:12:10.393193 ignition[864]: Ignition finished successfully Feb 13 20:12:10.440141 ignition[886]: Ignition 2.20.0 Feb 13 20:12:10.440148 ignition[886]: Stage: fetch Feb 13 20:12:10.440322 ignition[886]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:12:10.440331 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:12:10.440418 ignition[886]: parsed url from cmdline: "" Feb 13 20:12:10.440421 ignition[886]: no config URL provided Feb 13 20:12:10.440426 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:12:10.440433 ignition[886]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:12:10.440457 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 20:12:10.550674 ignition[886]: GET result: OK Feb 13 20:12:10.551294 ignition[886]: config has been read from IMDS userdata Feb 13 20:12:10.551336 ignition[886]: parsing config with SHA512: bb99bdaf762dc61d2380854f62a1fc3caf72af7c0c514b55290c71771ced0328562070eafee6e64729adcbdc29ae1a1d578577791a14227b06bcc06ed37627aa Feb 13 20:12:10.556350 unknown[886]: fetched base config from "system" Feb 13 20:12:10.556783 ignition[886]: fetch: fetch complete Feb 13 20:12:10.556358 unknown[886]: fetched base config from "system" Feb 13 20:12:10.556788 ignition[886]: fetch: fetch passed Feb 13 20:12:10.556364 unknown[886]: fetched user config from "azure" Feb 13 20:12:10.556848 ignition[886]: Ignition finished successfully
Feb 13 20:12:10.560300 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:12:10.577907 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:12:10.608150 ignition[893]: Ignition 2.20.0 Feb 13 20:12:10.608163 ignition[893]: Stage: kargs Feb 13 20:12:10.616591 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:12:10.608346 ignition[893]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:12:10.608356 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:12:10.609411 ignition[893]: kargs: kargs passed Feb 13 20:12:10.633849 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:12:10.609462 ignition[893]: Ignition finished successfully Feb 13 20:12:10.658490 ignition[899]: Ignition 2.20.0 Feb 13 20:12:10.658497 ignition[899]: Stage: disks Feb 13 20:12:10.663679 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:12:10.658736 ignition[899]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:12:10.671971 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:12:10.658746 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:12:10.683260 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:12:10.659778 ignition[899]: disks: disks passed Feb 13 20:12:10.695057 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:12:10.659824 ignition[899]: Ignition finished successfully Feb 13 20:12:10.706991 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:12:10.712937 systemd-networkd[877]: eth0: Gained IPv6LL Feb 13 20:12:10.725949 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:12:10.741868 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Feb 13 20:12:10.845321 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 20:12:10.851134 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:12:10.873748 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:12:10.931875 kernel: EXT4-fs (sda9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none. Feb 13 20:12:10.927613 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:12:10.934157 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:12:11.009724 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:12:11.018859 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:12:11.024897 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:12:11.054025 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (919) Feb 13 20:12:11.046530 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:12:11.092192 kernel: BTRFS info (device sda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 20:12:11.092215 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:12:11.092225 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:12:11.046568 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:12:11.069590 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:12:11.114657 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:12:11.118906 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:12:11.131991 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:12:12.059355 coreos-metadata[921]: Feb 13 20:12:12.059 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 20:12:12.070486 coreos-metadata[921]: Feb 13 20:12:12.070 INFO Fetch successful Feb 13 20:12:12.076580 coreos-metadata[921]: Feb 13 20:12:12.075 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 20:12:12.088556 coreos-metadata[921]: Feb 13 20:12:12.088 INFO Fetch successful Feb 13 20:12:12.094225 coreos-metadata[921]: Feb 13 20:12:12.094 INFO wrote hostname ci-4152.2.1-a-1780829b1e to /sysroot/etc/hostname Feb 13 20:12:12.103279 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:12:12.309340 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:12:12.365879 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:12:12.406901 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:12:12.415536 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:12:13.711428 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:12:13.734963 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:12:13.751621 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:12:13.767685 kernel: BTRFS info (device sda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 20:12:13.763124 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:12:13.791197 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 20:12:13.803775 ignition[1037]: INFO : Ignition 2.20.0 Feb 13 20:12:13.803775 ignition[1037]: INFO : Stage: mount Feb 13 20:12:13.813332 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:12:13.813332 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:12:13.813332 ignition[1037]: INFO : mount: mount passed Feb 13 20:12:13.813332 ignition[1037]: INFO : Ignition finished successfully Feb 13 20:12:13.811373 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:12:13.841867 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:12:13.859464 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:12:13.888892 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1050) Feb 13 20:12:13.888942 kernel: BTRFS info (device sda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 20:12:13.895105 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:12:13.900140 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:12:13.908664 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:12:13.910747 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:12:13.943270 ignition[1067]: INFO : Ignition 2.20.0 Feb 13 20:12:13.943270 ignition[1067]: INFO : Stage: files Feb 13 20:12:13.951445 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:12:13.951445 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:12:13.951445 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:12:13.990919 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:12:13.998615 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:12:14.121227 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:12:14.129139 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:12:14.129139 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:12:14.122157 unknown[1067]: wrote ssh authorized keys file for user: core Feb 13 20:12:14.155652 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:12:14.165910 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:12:14.165910 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 20:12:14.165910 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 20:12:14.220973 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 20:12:14.393717 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" 
Feb 13 20:12:14.393717 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:12:14.417085 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Feb 13 20:12:14.857883 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 20:12:15.080290 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 20:12:15.080290 ignition[1067]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:12:15.105742 ignition[1067]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:12:15.105742 ignition[1067]: INFO : files: files passed Feb 13 20:12:15.105742 ignition[1067]: INFO : Ignition finished successfully Feb 13 20:12:15.100322 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:12:15.132983 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:12:15.148818 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:12:15.172044 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:12:15.298631 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:12:15.298631 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:12:15.173689 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:12:15.330914 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:12:15.205948 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:12:15.215798 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:12:15.253914 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:12:15.307454 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:12:15.307595 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Feb 13 20:12:15.325154 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:12:15.336755 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:12:15.354784 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:12:15.381958 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:12:15.406435 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:12:15.439941 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:12:15.465072 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:12:15.471691 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:12:15.484909 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:12:15.496559 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:12:15.496701 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:12:15.513109 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:12:15.525942 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:12:15.536930 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:12:15.547821 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:12:15.560504 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:12:15.573346 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:12:15.585171 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:12:15.598387 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:12:15.611445 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:12:15.623872 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:12:15.634487 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:12:15.634686 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:12:15.651143 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:12:15.657810 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:12:15.672130 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:12:15.678055 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:12:15.686169 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:12:15.686349 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:12:15.704693 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:12:15.704877 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:12:15.719733 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:12:15.719880 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:12:15.730710 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 20:12:15.730859 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:12:15.764802 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:12:15.781883 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:12:15.812471 ignition[1119]: INFO : Ignition 2.20.0
Feb 13 20:12:15.812471 ignition[1119]: INFO : Stage: umount
Feb 13 20:12:15.812471 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:12:15.812471 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:12:15.812471 ignition[1119]: INFO : umount: umount passed
Feb 13 20:12:15.812471 ignition[1119]: INFO : Ignition finished successfully
Feb 13 20:12:15.782120 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:12:15.814461 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:12:15.827493 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:12:15.827741 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:12:15.841701 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:12:15.841874 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:12:15.870554 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:12:15.871396 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:12:15.871497 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:12:15.881837 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:12:15.881932 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:12:15.899221 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:12:15.899321 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:12:15.915028 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:12:15.915104 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:12:15.930292 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 20:12:15.930358 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 20:12:15.942265 systemd[1]: Stopped target network.target - Network.
Feb 13 20:12:15.956326 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:12:15.956405 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:12:15.968543 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:12:15.978629 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:12:15.983595 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:12:15.990733 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:12:16.002610 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:12:16.015624 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:12:16.015729 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:12:16.027084 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:12:16.027124 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:12:16.038340 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:12:16.038399 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:12:16.050852 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:12:16.050901 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:12:16.058189 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:12:16.293003 kernel: hv_netvsc 0022487c-8f0d-0022-487c-8f0d0022487c eth0: Data path switched from VF: enP14012s1
Feb 13 20:12:16.071314 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:12:16.082677 systemd-networkd[877]: eth0: DHCPv6 lease lost
Feb 13 20:12:16.083107 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:12:16.083194 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:12:16.089398 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:12:16.089492 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:12:16.101913 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:12:16.101985 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:12:16.115229 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:12:16.115296 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:12:16.152867 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:12:16.165754 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:12:16.165838 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:12:16.180430 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:12:16.196385 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:12:16.196496 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:12:16.228129 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:12:16.228289 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:12:16.240935 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:12:16.241020 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:12:16.252328 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:12:16.252380 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:12:16.264468 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:12:16.264525 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:12:16.293053 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:12:16.293128 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:12:16.305326 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:12:16.305396 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:12:16.346924 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:12:16.363416 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:12:16.363489 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:12:16.375756 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:12:16.375819 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:12:16.388007 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:12:16.601035 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:12:16.388058 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:12:16.400762 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:12:16.400815 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:12:16.414170 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:12:16.414226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:12:16.428624 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:12:16.428745 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:12:16.443233 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:12:16.443361 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:12:16.455192 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:12:16.489924 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:12:16.522957 systemd[1]: Switching root.
Feb 13 20:12:16.617961 systemd-journald[218]: Journal stopped
Feb 13 20:12:21.798860 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 20:12:21.798885 kernel: SELinux: policy capability open_perms=1
Feb 13 20:12:21.798895 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 20:12:21.798902 kernel: SELinux: policy capability always_check_network=0
Feb 13 20:12:21.798912 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 20:12:21.798919 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 20:12:21.798928 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 20:12:21.798936 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 20:12:21.798946 kernel: audit: type=1403 audit(1739477537.848:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 20:12:21.798956 systemd[1]: Successfully loaded SELinux policy in 86.773ms.
Feb 13 20:12:21.798967 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.315ms.
Feb 13 20:12:21.798976 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:12:21.798985 systemd[1]: Detected virtualization microsoft.
Feb 13 20:12:21.798994 systemd[1]: Detected architecture arm64.
Feb 13 20:12:21.799003 systemd[1]: Detected first boot.
Feb 13 20:12:21.799014 systemd[1]: Hostname set to .
Feb 13 20:12:21.799022 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:12:21.799031 zram_generator::config[1178]: No configuration found.
Feb 13 20:12:21.799041 systemd[1]: Populated /etc with preset unit settings.
Feb 13 20:12:21.799050 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 20:12:21.799058 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 20:12:21.799068 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 20:12:21.799078 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 20:12:21.799088 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 20:12:21.799097 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 20:12:21.799106 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 20:12:21.799115 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 20:12:21.799124 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 20:12:21.799133 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 20:12:21.799145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:12:21.799154 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:12:21.799163 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 20:12:21.799172 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 20:12:21.799181 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 20:12:21.799190 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:12:21.799199 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 20:12:21.799208 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:12:21.799219 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 20:12:21.799228 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:12:21.799237 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:12:21.799248 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:12:21.799258 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:12:21.799267 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 20:12:21.799277 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 20:12:21.799286 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:12:21.799297 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:12:21.799306 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:12:21.799315 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:12:21.799324 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:12:21.799334 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 20:12:21.799346 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 20:12:21.799357 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 20:12:21.799366 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 20:12:21.799375 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 20:12:21.799385 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 20:12:21.799394 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 20:12:21.799403 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 20:12:21.799413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:12:21.799424 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:12:21.799433 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 20:12:21.799443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:12:21.799452 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:12:21.799461 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:12:21.799471 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 20:12:21.799480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:12:21.799490 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 20:12:21.799501 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 13 20:12:21.799511 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Feb 13 20:12:21.799520 kernel: fuse: init (API version 7.39)
Feb 13 20:12:21.799528 kernel: loop: module loaded
Feb 13 20:12:21.799537 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:12:21.799547 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:12:21.799556 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 20:12:21.799565 kernel: ACPI: bus type drm_connector registered
Feb 13 20:12:21.799575 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 20:12:21.799600 systemd-journald[1297]: Collecting audit messages is disabled.
Feb 13 20:12:21.799620 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:12:21.799630 systemd-journald[1297]: Journal started
Feb 13 20:12:21.799659 systemd-journald[1297]: Runtime Journal (/run/log/journal/766c44510f3e4e9f888d296ee02c3060) is 8.0M, max 78.5M, 70.5M free.
Feb 13 20:12:21.824874 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:12:21.826020 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 20:12:21.832060 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 20:12:21.838536 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 20:12:21.844358 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 20:12:21.850661 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 20:12:21.859283 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 20:12:21.865476 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 20:12:21.872616 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:12:21.880310 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 20:12:21.880479 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 20:12:21.887743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:12:21.887909 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:12:21.894810 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:12:21.894983 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:12:21.903268 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:12:21.903429 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:12:21.911272 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 20:12:21.911429 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 20:12:21.918619 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:12:21.918860 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:12:21.925345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:12:21.932354 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 20:12:21.939897 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 20:12:21.949931 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:12:21.968369 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 20:12:21.981760 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 20:12:22.000811 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 20:12:22.007813 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 20:12:22.016728 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 20:12:22.024819 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 20:12:22.031696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:12:22.034681 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 20:12:22.042526 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:12:22.044840 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:12:22.052802 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:12:22.070861 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 20:12:22.084934 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 20:12:22.095832 systemd-journald[1297]: Time spent on flushing to /var/log/journal/766c44510f3e4e9f888d296ee02c3060 is 58.305ms for 887 entries.
Feb 13 20:12:22.095832 systemd-journald[1297]: System Journal (/var/log/journal/766c44510f3e4e9f888d296ee02c3060) is 11.8M, max 2.6G, 2.6G free.
Feb 13 20:12:22.203667 systemd-journald[1297]: Received client request to flush runtime journal.
Feb 13 20:12:22.203753 systemd-journald[1297]: /var/log/journal/766c44510f3e4e9f888d296ee02c3060/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Feb 13 20:12:22.203782 systemd-journald[1297]: Rotating system journal.
Feb 13 20:12:22.102964 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 20:12:22.111281 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 20:12:22.122584 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 20:12:22.142809 udevadm[1338]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 20:12:22.151092 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:12:22.206368 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 20:12:22.359063 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Feb 13 20:12:22.359082 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Feb 13 20:12:22.363769 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:12:22.375881 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 20:12:22.560720 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 20:12:22.575902 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:12:22.595549 systemd-tmpfiles[1358]: ACLs are not supported, ignoring.
Feb 13 20:12:22.595576 systemd-tmpfiles[1358]: ACLs are not supported, ignoring.
Feb 13 20:12:22.602167 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:12:25.150163 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 20:12:25.168967 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:12:25.189373 systemd-udevd[1364]: Using default interface naming scheme 'v255'.
Feb 13 20:12:25.462145 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:12:25.477907 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:12:25.540079 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Feb 13 20:12:25.779972 kernel: hv_vmbus: registering driver hv_balloon
Feb 13 20:12:25.780069 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 13 20:12:25.784654 kernel: hv_balloon: Memory hot add disabled on ARM64
Feb 13 20:12:25.787120 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 20:12:25.828748 kernel: hv_vmbus: registering driver hyperv_fb
Feb 13 20:12:25.828845 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 20:12:25.828877 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 13 20:12:25.833387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:12:25.847731 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 13 20:12:25.854268 kernel: Console: switching to colour dummy device 80x25
Feb 13 20:12:25.852184 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 20:12:25.869476 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 20:12:26.152931 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:12:26.153204 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:12:26.167866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:12:26.218707 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1378)
Feb 13 20:12:26.243533 systemd-networkd[1373]: lo: Link UP
Feb 13 20:12:26.243555 systemd-networkd[1373]: lo: Gained carrier
Feb 13 20:12:26.247233 systemd-networkd[1373]: Enumeration completed
Feb 13 20:12:26.247584 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:12:26.247594 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:12:26.247836 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:12:26.268694 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 20:12:26.314134 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 20:12:26.323932 kernel: mlx5_core 36bc:00:02.0 enP14012s1: Link up
Feb 13 20:12:26.350665 kernel: hv_netvsc 0022487c-8f0d-0022-487c-8f0d0022487c eth0: Data path switched to VF: enP14012s1
Feb 13 20:12:26.351590 systemd-networkd[1373]: enP14012s1: Link UP
Feb 13 20:12:26.351724 systemd-networkd[1373]: eth0: Link UP
Feb 13 20:12:26.351732 systemd-networkd[1373]: eth0: Gained carrier
Feb 13 20:12:26.351747 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:12:26.355901 systemd-networkd[1373]: enP14012s1: Gained carrier
Feb 13 20:12:26.365679 systemd-networkd[1373]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 13 20:12:26.569212 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 20:12:26.587871 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 20:12:26.620574 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:12:26.856622 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 20:12:26.865000 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:12:26.877833 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 20:12:26.881881 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:12:26.906494 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 20:12:26.913776 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:12:26.921078 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 20:12:26.921113 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:12:26.926877 systemd[1]: Reached target machines.target - Containers.
Feb 13 20:12:26.933215 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 20:12:26.946776 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 20:12:26.954965 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 20:12:26.961791 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:12:26.963016 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 20:12:26.984860 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 20:12:26.993153 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 20:12:27.004867 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:12:27.012390 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 20:12:27.020610 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 20:12:27.053659 kernel: loop0: detected capacity change from 0 to 116808
Feb 13 20:12:27.480804 systemd-networkd[1373]: eth0: Gained IPv6LL
Feb 13 20:12:27.482699 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 20:12:27.971843 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 20:12:28.158813 kernel: loop1: detected capacity change from 0 to 113536
Feb 13 20:12:28.312848 systemd-networkd[1373]: enP14012s1: Gained IPv6LL
Feb 13 20:12:28.816542 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 20:12:28.818383 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:12:28.880671 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 20:12:29.105681 kernel: loop3: detected capacity change from 0 to 28720 Feb 13 20:12:29.677670 kernel: loop4: detected capacity change from 0 to 116808 Feb 13 20:12:29.688678 kernel: loop5: detected capacity change from 0 to 113536 Feb 13 20:12:29.700661 kernel: loop6: detected capacity change from 0 to 194096 Feb 13 20:12:29.710650 kernel: loop7: detected capacity change from 0 to 28720 Feb 13 20:12:29.713443 (sd-merge)[1510]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Feb 13 20:12:29.714677 (sd-merge)[1510]: Merged extensions into '/usr'. Feb 13 20:12:29.718080 systemd[1]: Reloading requested from client PID 1492 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:12:29.718100 systemd[1]: Reloading... Feb 13 20:12:29.780817 zram_generator::config[1534]: No configuration found. Feb 13 20:12:30.031195 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:12:30.103998 systemd[1]: Reloading finished in 385 ms. Feb 13 20:12:30.121620 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:12:30.134800 systemd[1]: Starting ensure-sysext.service... Feb 13 20:12:30.141958 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:12:30.158310 systemd[1]: Reloading requested from client PID 1598 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:12:30.158332 systemd[1]: Reloading... Feb 13 20:12:30.169898 systemd-tmpfiles[1599]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Feb 13 20:12:30.171349 systemd-tmpfiles[1599]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:12:30.172096 systemd-tmpfiles[1599]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:12:30.172322 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Feb 13 20:12:30.172373 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Feb 13 20:12:30.222780 zram_generator::config[1628]: No configuration found. Feb 13 20:12:30.335557 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:12:30.357131 systemd-tmpfiles[1599]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:12:30.357141 systemd-tmpfiles[1599]: Skipping /boot Feb 13 20:12:30.364129 systemd-tmpfiles[1599]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:12:30.364257 systemd-tmpfiles[1599]: Skipping /boot Feb 13 20:12:30.407759 systemd[1]: Reloading finished in 249 ms. Feb 13 20:12:30.422824 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:12:30.438919 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 20:12:30.459863 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:12:30.468805 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:12:30.477883 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:12:30.494833 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:12:30.506263 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 20:12:30.509826 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:12:30.525566 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:12:30.547968 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:12:30.555891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:12:30.556851 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:12:30.566809 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:12:30.566985 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:12:30.574121 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:12:30.574288 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:12:30.582412 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:12:30.586056 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:12:30.598972 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:12:30.609071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:12:30.613915 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:12:30.622973 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:12:30.642036 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:12:30.648006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:12:30.648887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 13 20:12:30.649065 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:12:30.656118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:12:30.656280 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:12:30.664917 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:12:30.665219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:12:30.677383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:12:30.684056 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:12:30.692040 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:12:30.699916 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:12:30.709878 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:12:30.716351 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:12:30.716443 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:12:30.725903 systemd[1]: Finished ensure-sysext.service. Feb 13 20:12:30.733119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:12:30.733366 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:12:30.743131 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:12:30.743355 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:12:30.752749 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:12:30.752989 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:12:30.763414 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Feb 13 20:12:30.763629 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:12:30.773844 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:12:30.773948 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:12:31.006049 systemd-resolved[1696]: Positive Trust Anchors: Feb 13 20:12:31.006071 systemd-resolved[1696]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:12:31.006102 systemd-resolved[1696]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:12:31.100879 systemd-resolved[1696]: Using system hostname 'ci-4152.2.1-a-1780829b1e'. Feb 13 20:12:31.102858 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:12:31.109471 systemd[1]: Reached target network.target - Network. Feb 13 20:12:31.114647 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:12:31.120596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:12:31.361243 augenrules[1756]: No rules Feb 13 20:12:31.363059 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:12:31.363329 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 20:12:31.906042 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Feb 13 20:12:31.913759 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:12:36.008147 ldconfig[1486]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:12:36.018563 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:12:36.030911 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:12:36.045328 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:12:36.052049 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:12:36.058236 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:12:36.065591 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:12:36.072737 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:12:36.078740 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:12:36.085708 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:12:36.092598 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:12:36.092626 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:12:36.097574 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:12:36.104704 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:12:36.113068 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:12:36.302202 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Feb 13 20:12:36.308541 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:12:36.314692 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:12:36.319823 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:12:36.325165 systemd[1]: System is tainted: cgroupsv1 Feb 13 20:12:36.325205 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:12:36.325225 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:12:36.334744 systemd[1]: Starting chronyd.service - NTP client/server... Feb 13 20:12:36.342791 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:12:36.358902 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:12:36.368883 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:12:36.373573 (chronyd)[1772]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Feb 13 20:12:36.383851 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:12:36.386086 chronyd[1782]: chronyd version 4.6 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Feb 13 20:12:36.397040 jq[1780]: false Feb 13 20:12:36.402271 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:12:36.408197 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:12:36.408239 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Feb 13 20:12:36.409617 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. 
Feb 13 20:12:36.416407 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Feb 13 20:12:36.418200 KVP[1785]: KVP starting; pid is:1785 Feb 13 20:12:36.418535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:12:36.429890 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:12:36.440892 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:12:36.449792 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:12:36.457801 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:12:36.465002 KVP[1785]: KVP LIC Version: 3.1 Feb 13 20:12:36.469563 kernel: hv_utils: KVP IC version 4.0 Feb 13 20:12:36.470878 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:12:36.479827 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:12:36.491244 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:12:36.502819 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:12:36.520765 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:12:36.523880 jq[1802]: true Feb 13 20:12:36.535525 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:12:36.535842 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:12:36.546481 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:12:36.548507 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:12:36.570677 jq[1807]: true Feb 13 20:12:36.707607 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Feb 13 20:12:36.744120 chronyd[1782]: Timezone right/UTC failed leap second check, ignoring Feb 13 20:12:36.744386 chronyd[1782]: Loaded seccomp filter (level 2) Feb 13 20:12:36.746095 systemd[1]: Started chronyd.service - NTP client/server. Feb 13 20:12:36.767568 extend-filesystems[1783]: Found loop4 Feb 13 20:12:36.767568 extend-filesystems[1783]: Found loop5 Feb 13 20:12:36.767568 extend-filesystems[1783]: Found loop6 Feb 13 20:12:36.767568 extend-filesystems[1783]: Found loop7 Feb 13 20:12:36.767568 extend-filesystems[1783]: Found sda Feb 13 20:12:36.767568 extend-filesystems[1783]: Found sda1 Feb 13 20:12:36.767568 extend-filesystems[1783]: Found sda2 Feb 13 20:12:36.767568 extend-filesystems[1783]: Found sda3 Feb 13 20:12:36.767568 extend-filesystems[1783]: Found usr Feb 13 20:12:36.767568 extend-filesystems[1783]: Found sda4 Feb 13 20:12:36.767568 extend-filesystems[1783]: Found sda6 Feb 13 20:12:36.767568 extend-filesystems[1783]: Found sda7 Feb 13 20:12:36.767568 extend-filesystems[1783]: Found sda9 Feb 13 20:12:36.767568 extend-filesystems[1783]: Checking size of /dev/sda9 Feb 13 20:12:36.765482 systemd-logind[1798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Feb 13 20:12:37.250737 tar[1806]: linux-arm64/helm Feb 13 20:12:37.250919 coreos-metadata[1775]: Feb 13 20:12:37.226 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 20:12:37.250919 coreos-metadata[1775]: Feb 13 20:12:37.236 INFO Fetch successful Feb 13 20:12:37.250919 coreos-metadata[1775]: Feb 13 20:12:37.236 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 13 20:12:37.250919 coreos-metadata[1775]: Feb 13 20:12:37.241 INFO Fetch successful Feb 13 20:12:37.250919 coreos-metadata[1775]: Feb 13 20:12:37.241 INFO Fetching http://168.63.129.16/machine/875c9aaa-f45c-4fd0-a365-0567e22c8657/5ed028db%2Da350%2D473b%2Da481%2Dc10e0dac085f.%5Fci%2D4152.2.1%2Da%2D1780829b1e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 
13 20:12:37.250919 coreos-metadata[1775]: Feb 13 20:12:37.243 INFO Fetch successful Feb 13 20:12:37.250919 coreos-metadata[1775]: Feb 13 20:12:37.243 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 13 20:12:36.780269 dbus-daemon[1776]: [system] SELinux support is enabled Feb 13 20:12:37.251397 update_engine[1801]: I20250213 20:12:36.834184 1801 main.cc:92] Flatcar Update Engine starting Feb 13 20:12:37.251397 update_engine[1801]: I20250213 20:12:36.835398 1801 update_check_scheduler.cc:74] Next update check in 7m3s Feb 13 20:12:37.251593 extend-filesystems[1783]: Old size kept for /dev/sda9 Feb 13 20:12:37.251593 extend-filesystems[1783]: Found sr0 Feb 13 20:12:36.766216 systemd-logind[1798]: New seat seat0. Feb 13 20:12:36.797488 dbus-daemon[1776]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 20:12:37.267051 coreos-metadata[1775]: Feb 13 20:12:37.260 INFO Fetch successful Feb 13 20:12:36.766854 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:12:36.780492 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:12:36.796962 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:12:36.796989 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:12:36.816153 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:12:36.816173 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:12:36.833081 systemd[1]: motdgen.service: Deactivated successfully. 
Feb 13 20:12:36.833315 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:12:36.839072 (ntainerd)[1857]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:12:36.841929 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:12:36.848587 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:12:36.858002 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:12:37.182029 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:12:37.182267 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:12:37.308101 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:12:37.329067 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:12:37.371031 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1848) Feb 13 20:12:37.799816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:12:37.811482 (kubelet)[1933]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:12:37.904039 tar[1806]: linux-arm64/LICENSE Feb 13 20:12:37.904154 tar[1806]: linux-arm64/README.md Feb 13 20:12:37.916189 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Feb 13 20:12:38.002242 locksmithd[1861]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:12:38.266417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:12:38.403001 kubelet[1933]: E0213 20:12:38.264380 1933 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:12:38.266578 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:12:38.409508 bash[1830]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:12:38.412428 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:12:38.415609 sshd_keygen[1832]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:12:38.423213 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:12:38.440347 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:12:38.455672 containerd[1857]: time="2025-02-13T20:12:38.454924920Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 20:12:38.455761 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:12:38.464879 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 20:12:38.478471 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:12:38.478748 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:12:38.493572 containerd[1857]: time="2025-02-13T20:12:38.493522600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:12:38.495953 containerd[1857]: time="2025-02-13T20:12:38.495897680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:12:38.495953 containerd[1857]: time="2025-02-13T20:12:38.495942360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:12:38.496044 containerd[1857]: time="2025-02-13T20:12:38.495963600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:12:38.496182 containerd[1857]: time="2025-02-13T20:12:38.496151880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:12:38.496182 containerd[1857]: time="2025-02-13T20:12:38.496175960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:12:38.496276 containerd[1857]: time="2025-02-13T20:12:38.496239400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:12:38.496276 containerd[1857]: time="2025-02-13T20:12:38.496269240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:12:38.497700 containerd[1857]: time="2025-02-13T20:12:38.496480240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:12:38.497700 containerd[1857]: time="2025-02-13T20:12:38.496501280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:12:38.497700 containerd[1857]: time="2025-02-13T20:12:38.496514840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:12:38.497700 containerd[1857]: time="2025-02-13T20:12:38.496524360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:12:38.497700 containerd[1857]: time="2025-02-13T20:12:38.496594440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:12:38.498015 containerd[1857]: time="2025-02-13T20:12:38.497976880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:12:38.498183 containerd[1857]: time="2025-02-13T20:12:38.498157640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:12:38.498183 containerd[1857]: time="2025-02-13T20:12:38.498180120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:12:38.498275 containerd[1857]: time="2025-02-13T20:12:38.498255720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 20:12:38.498325 containerd[1857]: time="2025-02-13T20:12:38.498307560Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:12:38.505968 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:12:38.514933 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Feb 13 20:12:38.531298 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:12:38.547974 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:12:38.555190 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:12:38.562287 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:12:38.955800 containerd[1857]: time="2025-02-13T20:12:38.955742080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:12:38.956029 containerd[1857]: time="2025-02-13T20:12:38.955840960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:12:38.956029 containerd[1857]: time="2025-02-13T20:12:38.955860520Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:12:38.956029 containerd[1857]: time="2025-02-13T20:12:38.955885160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:12:38.956029 containerd[1857]: time="2025-02-13T20:12:38.955910960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:12:38.956170 containerd[1857]: time="2025-02-13T20:12:38.956139000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:12:38.956586 containerd[1857]: time="2025-02-13T20:12:38.956546200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 13 20:12:38.956758 containerd[1857]: time="2025-02-13T20:12:38.956721440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:12:38.956858 containerd[1857]: time="2025-02-13T20:12:38.956759080Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:12:38.956858 containerd[1857]: time="2025-02-13T20:12:38.956783920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:12:38.956858 containerd[1857]: time="2025-02-13T20:12:38.956804040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:12:38.956858 containerd[1857]: time="2025-02-13T20:12:38.956816800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:12:38.956858 containerd[1857]: time="2025-02-13T20:12:38.956837760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:12:38.956858 containerd[1857]: time="2025-02-13T20:12:38.956857800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:12:38.957008 containerd[1857]: time="2025-02-13T20:12:38.956871600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:12:38.957008 containerd[1857]: time="2025-02-13T20:12:38.956893720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:12:38.957008 containerd[1857]: time="2025-02-13T20:12:38.956910280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 13 20:12:38.957008 containerd[1857]: time="2025-02-13T20:12:38.956924520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:12:38.957008 containerd[1857]: time="2025-02-13T20:12:38.956951840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957008 containerd[1857]: time="2025-02-13T20:12:38.956973440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957008 containerd[1857]: time="2025-02-13T20:12:38.956999920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957189 containerd[1857]: time="2025-02-13T20:12:38.957014360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957189 containerd[1857]: time="2025-02-13T20:12:38.957033400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957189 containerd[1857]: time="2025-02-13T20:12:38.957053440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957189 containerd[1857]: time="2025-02-13T20:12:38.957064560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957189 containerd[1857]: time="2025-02-13T20:12:38.957085280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957189 containerd[1857]: time="2025-02-13T20:12:38.957105080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957189 containerd[1857]: time="2025-02-13T20:12:38.957119800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Feb 13 20:12:38.957189 containerd[1857]: time="2025-02-13T20:12:38.957145400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957189 containerd[1857]: time="2025-02-13T20:12:38.957167760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957189 containerd[1857]: time="2025-02-13T20:12:38.957180840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957589 containerd[1857]: time="2025-02-13T20:12:38.957201120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:12:38.957589 containerd[1857]: time="2025-02-13T20:12:38.957231080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957589 containerd[1857]: time="2025-02-13T20:12:38.957252080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957589 containerd[1857]: time="2025-02-13T20:12:38.957267720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:12:38.957589 containerd[1857]: time="2025-02-13T20:12:38.957349120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:12:38.957589 containerd[1857]: time="2025-02-13T20:12:38.957370960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:12:38.957589 containerd[1857]: time="2025-02-13T20:12:38.957388680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 13 20:12:38.957589 containerd[1857]: time="2025-02-13T20:12:38.957406040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:12:38.957589 containerd[1857]: time="2025-02-13T20:12:38.957418720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.957589 containerd[1857]: time="2025-02-13T20:12:38.957437520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:12:38.957589 containerd[1857]: time="2025-02-13T20:12:38.957446800Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:12:38.957589 containerd[1857]: time="2025-02-13T20:12:38.957460080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:12:38.958015 containerd[1857]: time="2025-02-13T20:12:38.957813160Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:12:38.958015 containerd[1857]: time="2025-02-13T20:12:38.957877160Z" level=info msg="Connect containerd service" Feb 13 20:12:38.958015 containerd[1857]: time="2025-02-13T20:12:38.957919320Z" level=info msg="using legacy CRI server" Feb 13 20:12:38.958015 containerd[1857]: time="2025-02-13T20:12:38.957926160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:12:38.958289 containerd[1857]: 
time="2025-02-13T20:12:38.958095200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:12:38.970674 containerd[1857]: time="2025-02-13T20:12:38.959475160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:12:38.970674 containerd[1857]: time="2025-02-13T20:12:38.959862200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:12:38.970674 containerd[1857]: time="2025-02-13T20:12:38.959908200Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:12:38.970674 containerd[1857]: time="2025-02-13T20:12:38.959966600Z" level=info msg="Start subscribing containerd event" Feb 13 20:12:38.970674 containerd[1857]: time="2025-02-13T20:12:38.960009760Z" level=info msg="Start recovering state" Feb 13 20:12:38.970674 containerd[1857]: time="2025-02-13T20:12:38.960085280Z" level=info msg="Start event monitor" Feb 13 20:12:38.970674 containerd[1857]: time="2025-02-13T20:12:38.960102160Z" level=info msg="Start snapshots syncer" Feb 13 20:12:38.970674 containerd[1857]: time="2025-02-13T20:12:38.960110800Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:12:38.970674 containerd[1857]: time="2025-02-13T20:12:38.960127960Z" level=info msg="Start streaming server" Feb 13 20:12:38.970674 containerd[1857]: time="2025-02-13T20:12:38.960195960Z" level=info msg="containerd successfully booted in 0.506095s" Feb 13 20:12:38.960854 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:12:38.971399 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:12:38.984712 systemd[1]: Startup finished in 15.017s (kernel) + 21.221s (userspace) = 36.239s. 
Feb 13 20:12:39.721743 login[1989]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:39.722884 login[1990]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:39.733527 systemd-logind[1798]: New session 2 of user core. Feb 13 20:12:39.735025 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:12:39.744879 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:12:39.749863 systemd-logind[1798]: New session 1 of user core. Feb 13 20:12:39.759467 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:12:39.769930 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:12:39.772524 (systemd)[2000]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:12:40.150467 systemd[2000]: Queued start job for default target default.target. Feb 13 20:12:40.150866 systemd[2000]: Created slice app.slice - User Application Slice. Feb 13 20:12:40.150893 systemd[2000]: Reached target paths.target - Paths. Feb 13 20:12:40.150905 systemd[2000]: Reached target timers.target - Timers. Feb 13 20:12:40.155740 systemd[2000]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:12:40.162735 systemd[2000]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:12:40.162795 systemd[2000]: Reached target sockets.target - Sockets. Feb 13 20:12:40.162808 systemd[2000]: Reached target basic.target - Basic System. Feb 13 20:12:40.162848 systemd[2000]: Reached target default.target - Main User Target. Feb 13 20:12:40.162873 systemd[2000]: Startup finished in 384ms. Feb 13 20:12:40.163189 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:12:40.166948 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:12:40.167891 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 20:12:41.921598 waagent[1985]: 2025-02-13T20:12:41.921493Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 20:12:41.927390 waagent[1985]: 2025-02-13T20:12:41.927309Z INFO Daemon Daemon OS: flatcar 4152.2.1 Feb 13 20:12:41.932035 waagent[1985]: 2025-02-13T20:12:41.931975Z INFO Daemon Daemon Python: 3.11.10 Feb 13 20:12:41.937802 waagent[1985]: 2025-02-13T20:12:41.937721Z INFO Daemon Daemon Run daemon Feb 13 20:12:41.943662 waagent[1985]: 2025-02-13T20:12:41.942990Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4152.2.1' Feb 13 20:12:41.952384 waagent[1985]: 2025-02-13T20:12:41.952311Z INFO Daemon Daemon Using waagent for provisioning Feb 13 20:12:41.957875 waagent[1985]: 2025-02-13T20:12:41.957825Z INFO Daemon Daemon Activate resource disk Feb 13 20:12:41.962654 waagent[1985]: 2025-02-13T20:12:41.962578Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 20:12:41.975737 waagent[1985]: 2025-02-13T20:12:41.975668Z INFO Daemon Daemon Found device: None Feb 13 20:12:41.980585 waagent[1985]: 2025-02-13T20:12:41.980526Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 20:12:41.989381 waagent[1985]: 2025-02-13T20:12:41.989319Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 20:12:42.001243 waagent[1985]: 2025-02-13T20:12:42.001192Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 20:12:42.007256 waagent[1985]: 2025-02-13T20:12:42.007192Z INFO Daemon Daemon Running default provisioning handler Feb 13 20:12:42.020144 waagent[1985]: 2025-02-13T20:12:42.019519Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Feb 13 20:12:42.034358 waagent[1985]: 2025-02-13T20:12:42.034287Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 20:12:42.045353 waagent[1985]: 2025-02-13T20:12:42.045288Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 20:12:42.050722 waagent[1985]: 2025-02-13T20:12:42.050662Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 20:12:42.315719 waagent[1985]: 2025-02-13T20:12:42.313406Z INFO Daemon Daemon Successfully mounted dvd Feb 13 20:12:42.327446 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 20:12:42.331677 waagent[1985]: 2025-02-13T20:12:42.331273Z INFO Daemon Daemon Detect protocol endpoint Feb 13 20:12:42.336286 waagent[1985]: 2025-02-13T20:12:42.336228Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 20:12:42.341943 waagent[1985]: 2025-02-13T20:12:42.341889Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Feb 13 20:12:42.348658 waagent[1985]: 2025-02-13T20:12:42.348595Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 20:12:42.354026 waagent[1985]: 2025-02-13T20:12:42.353976Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 20:12:42.359079 waagent[1985]: 2025-02-13T20:12:42.359030Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 20:12:42.505036 waagent[1985]: 2025-02-13T20:12:42.504974Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 20:12:42.511715 waagent[1985]: 2025-02-13T20:12:42.511668Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 20:12:42.516946 waagent[1985]: 2025-02-13T20:12:42.516883Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 20:12:42.756729 waagent[1985]: 2025-02-13T20:12:42.756250Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 20:12:42.763354 waagent[1985]: 2025-02-13T20:12:42.763277Z INFO Daemon Daemon Forcing an update of the goal state. 
Feb 13 20:12:42.772866 waagent[1985]: 2025-02-13T20:12:42.772809Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 20:12:42.837644 waagent[1985]: 2025-02-13T20:12:42.837590Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 20:12:42.843711 waagent[1985]: 2025-02-13T20:12:42.843632Z INFO Daemon Feb 13 20:12:42.846552 waagent[1985]: 2025-02-13T20:12:42.846501Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e3fbb72c-a3a8-446f-822e-59c23bee9674 eTag: 11784569618011583784 source: Fabric] Feb 13 20:12:42.858323 waagent[1985]: 2025-02-13T20:12:42.858269Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Feb 13 20:12:42.865339 waagent[1985]: 2025-02-13T20:12:42.865287Z INFO Daemon Feb 13 20:12:42.868228 waagent[1985]: 2025-02-13T20:12:42.868175Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 20:12:42.879322 waagent[1985]: 2025-02-13T20:12:42.879281Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 20:12:42.971772 waagent[1985]: 2025-02-13T20:12:42.971679Z INFO Daemon Downloaded certificate {'thumbprint': 'E9126FF9FA4DD48EA39CACA13E75A6FD2C65D416', 'hasPrivateKey': False} Feb 13 20:12:42.982022 waagent[1985]: 2025-02-13T20:12:42.981969Z INFO Daemon Downloaded certificate {'thumbprint': '5A0FF6780ED9572C34596923A814944BC1419175', 'hasPrivateKey': True} Feb 13 20:12:42.992537 waagent[1985]: 2025-02-13T20:12:42.992485Z INFO Daemon Fetch goal state completed Feb 13 20:12:43.004078 waagent[1985]: 2025-02-13T20:12:43.004033Z INFO Daemon Daemon Starting provisioning Feb 13 20:12:43.009482 waagent[1985]: 2025-02-13T20:12:43.009385Z INFO Daemon Daemon Handle ovf-env.xml. 
Feb 13 20:12:43.014115 waagent[1985]: 2025-02-13T20:12:43.014064Z INFO Daemon Daemon Set hostname [ci-4152.2.1-a-1780829b1e] Feb 13 20:12:43.025661 waagent[1985]: 2025-02-13T20:12:43.024837Z INFO Daemon Daemon Publish hostname [ci-4152.2.1-a-1780829b1e] Feb 13 20:12:43.031341 waagent[1985]: 2025-02-13T20:12:43.031277Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 20:12:43.037792 waagent[1985]: 2025-02-13T20:12:43.037736Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 20:12:43.057727 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:12:43.057734 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:12:43.057763 systemd-networkd[1373]: eth0: DHCP lease lost Feb 13 20:12:43.064269 waagent[1985]: 2025-02-13T20:12:43.058872Z INFO Daemon Daemon Create user account if not exists Feb 13 20:12:43.064767 waagent[1985]: 2025-02-13T20:12:43.064705Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 20:12:43.070620 systemd-networkd[1373]: eth0: DHCPv6 lease lost Feb 13 20:12:43.071358 waagent[1985]: 2025-02-13T20:12:43.071281Z INFO Daemon Daemon Configure sudoer Feb 13 20:12:43.076400 waagent[1985]: 2025-02-13T20:12:43.076329Z INFO Daemon Daemon Configure sshd Feb 13 20:12:43.081779 waagent[1985]: 2025-02-13T20:12:43.081712Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 20:12:43.095535 waagent[1985]: 2025-02-13T20:12:43.095465Z INFO Daemon Daemon Deploy ssh public key. 
Feb 13 20:12:43.105934 systemd-networkd[1373]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 20:12:44.353811 waagent[1985]: 2025-02-13T20:12:44.353761Z INFO Daemon Daemon Provisioning complete Feb 13 20:12:44.377942 waagent[1985]: 2025-02-13T20:12:44.377891Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 20:12:44.384439 waagent[1985]: 2025-02-13T20:12:44.384375Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 13 20:12:44.394459 waagent[1985]: 2025-02-13T20:12:44.394397Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 20:12:44.529357 waagent[2060]: 2025-02-13T20:12:44.528834Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 20:12:44.529357 waagent[2060]: 2025-02-13T20:12:44.528988Z INFO ExtHandler ExtHandler OS: flatcar 4152.2.1 Feb 13 20:12:44.529357 waagent[2060]: 2025-02-13T20:12:44.529040Z INFO ExtHandler ExtHandler Python: 3.11.10 Feb 13 20:12:44.652381 waagent[2060]: 2025-02-13T20:12:44.652245Z INFO ExtHandler ExtHandler Distro: flatcar-4152.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 20:12:44.652725 waagent[2060]: 2025-02-13T20:12:44.652684Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 20:12:44.652869 waagent[2060]: 2025-02-13T20:12:44.652836Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 20:12:44.661248 waagent[2060]: 2025-02-13T20:12:44.661176Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 20:12:44.666934 waagent[2060]: 2025-02-13T20:12:44.666887Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 20:12:44.668658 waagent[2060]: 2025-02-13T20:12:44.667545Z INFO ExtHandler Feb 13 20:12:44.668658 waagent[2060]: 
2025-02-13T20:12:44.667620Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a0b78175-469a-4d62-b86b-73116eab52d2 eTag: 11784569618011583784 source: Fabric] Feb 13 20:12:44.668658 waagent[2060]: 2025-02-13T20:12:44.667930Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 13 20:12:44.668658 waagent[2060]: 2025-02-13T20:12:44.668456Z INFO ExtHandler Feb 13 20:12:44.668658 waagent[2060]: 2025-02-13T20:12:44.668524Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 20:12:44.672574 waagent[2060]: 2025-02-13T20:12:44.672539Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 20:12:44.756842 waagent[2060]: 2025-02-13T20:12:44.756763Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E9126FF9FA4DD48EA39CACA13E75A6FD2C65D416', 'hasPrivateKey': False} Feb 13 20:12:44.757423 waagent[2060]: 2025-02-13T20:12:44.757385Z INFO ExtHandler Downloaded certificate {'thumbprint': '5A0FF6780ED9572C34596923A814944BC1419175', 'hasPrivateKey': True} Feb 13 20:12:44.757947 waagent[2060]: 2025-02-13T20:12:44.757906Z INFO ExtHandler Fetch goal state completed Feb 13 20:12:44.775460 waagent[2060]: 2025-02-13T20:12:44.775402Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2060 Feb 13 20:12:44.775763 waagent[2060]: 2025-02-13T20:12:44.775726Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 20:12:44.777496 waagent[2060]: 2025-02-13T20:12:44.777454Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4152.2.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 20:12:44.777981 waagent[2060]: 2025-02-13T20:12:44.777942Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 20:12:44.968693 waagent[2060]: 2025-02-13T20:12:44.968586Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 
20:12:44.969023 waagent[2060]: 2025-02-13T20:12:44.968984Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 20:12:44.975094 waagent[2060]: 2025-02-13T20:12:44.975060Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 13 20:12:44.982101 systemd[1]: Reloading requested from client PID 2075 ('systemctl') (unit waagent.service)... Feb 13 20:12:44.982118 systemd[1]: Reloading... Feb 13 20:12:45.063670 zram_generator::config[2112]: No configuration found. Feb 13 20:12:45.168064 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:12:45.244378 systemd[1]: Reloading finished in 261 ms. Feb 13 20:12:45.261936 waagent[2060]: 2025-02-13T20:12:45.261842Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 20:12:45.268534 systemd[1]: Reloading requested from client PID 2168 ('systemctl') (unit waagent.service)... Feb 13 20:12:45.268703 systemd[1]: Reloading... Feb 13 20:12:45.341682 zram_generator::config[2202]: No configuration found. Feb 13 20:12:45.457659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:12:45.534003 systemd[1]: Reloading finished in 264 ms. 
Feb 13 20:12:45.560694 waagent[2060]: 2025-02-13T20:12:45.559901Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 20:12:45.560694 waagent[2060]: 2025-02-13T20:12:45.560078Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 20:12:45.699486 waagent[2060]: 2025-02-13T20:12:45.699398Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 13 20:12:45.700359 waagent[2060]: 2025-02-13T20:12:45.700303Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 20:12:45.701362 waagent[2060]: 2025-02-13T20:12:45.701297Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 20:12:45.701675 waagent[2060]: 2025-02-13T20:12:45.701549Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 20:12:45.702115 waagent[2060]: 2025-02-13T20:12:45.702051Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 20:12:45.702295 waagent[2060]: 2025-02-13T20:12:45.702202Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 20:12:45.702765 waagent[2060]: 2025-02-13T20:12:45.702713Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 20:12:45.702923 waagent[2060]: 2025-02-13T20:12:45.702883Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 20:12:45.703195 waagent[2060]: 2025-02-13T20:12:45.703143Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 13 20:12:45.703408 waagent[2060]: 2025-02-13T20:12:45.703363Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 13 20:12:45.703408 waagent[2060]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 13 20:12:45.703408 waagent[2060]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 13 20:12:45.703408 waagent[2060]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 13 20:12:45.703408 waagent[2060]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 13 20:12:45.703408 waagent[2060]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 20:12:45.703408 waagent[2060]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 20:12:45.703846 waagent[2060]: 2025-02-13T20:12:45.703766Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 13 20:12:45.703955 waagent[2060]: 2025-02-13T20:12:45.703882Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 20:12:45.704846 waagent[2060]: 2025-02-13T20:12:45.704772Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 20:12:45.705138 waagent[2060]: 2025-02-13T20:12:45.704980Z INFO EnvHandler ExtHandler Configure routes Feb 13 20:12:45.705138 waagent[2060]: 2025-02-13T20:12:45.705048Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 13 20:12:45.705241 waagent[2060]: 2025-02-13T20:12:45.705200Z INFO EnvHandler ExtHandler Gateway:None Feb 13 20:12:45.705302 waagent[2060]: 2025-02-13T20:12:45.705273Z INFO EnvHandler ExtHandler Routes:None Feb 13 20:12:45.705850 waagent[2060]: 2025-02-13T20:12:45.705804Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 20:12:45.716090 waagent[2060]: 2025-02-13T20:12:45.715973Z INFO ExtHandler ExtHandler Feb 13 20:12:45.716306 waagent[2060]: 2025-02-13T20:12:45.716269Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a80f3cda-63f9-4839-a962-5de87147ce44 correlation 71aae3aa-0337-4830-9779-bd5bc3074819 created: 2025-02-13T20:10:56.768898Z] Feb 13 20:12:45.716821 waagent[2060]: 2025-02-13T20:12:45.716772Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 13 20:12:45.717915 waagent[2060]: 2025-02-13T20:12:45.717447Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Feb 13 20:12:45.725339 waagent[2060]: 2025-02-13T20:12:45.725256Z INFO MonitorHandler ExtHandler Network interfaces: Feb 13 20:12:45.725339 waagent[2060]: Executing ['ip', '-a', '-o', 'link']: Feb 13 20:12:45.725339 waagent[2060]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 13 20:12:45.725339 waagent[2060]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:8f:0d brd ff:ff:ff:ff:ff:ff Feb 13 20:12:45.725339 waagent[2060]: 3: enP14012s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:8f:0d brd ff:ff:ff:ff:ff:ff\ altname enP14012p0s2 Feb 13 20:12:45.725339 waagent[2060]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 13 20:12:45.725339 waagent[2060]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever 
preferred_lft forever Feb 13 20:12:45.725339 waagent[2060]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 13 20:12:45.725339 waagent[2060]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 13 20:12:45.725339 waagent[2060]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Feb 13 20:12:45.725339 waagent[2060]: 2: eth0 inet6 fe80::222:48ff:fe7c:8f0d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 20:12:45.725339 waagent[2060]: 3: enP14012s1 inet6 fe80::222:48ff:fe7c:8f0d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 20:12:46.020608 waagent[2060]: 2025-02-13T20:12:46.020406Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6B82C91E-7B20-4BAE-B12A-68E0967D77CD;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Feb 13 20:12:46.968438 waagent[2060]: 2025-02-13T20:12:46.968348Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 13 20:12:46.968438 waagent[2060]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:12:46.968438 waagent[2060]: pkts bytes target prot opt in out source destination Feb 13 20:12:46.968438 waagent[2060]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:12:46.968438 waagent[2060]: pkts bytes target prot opt in out source destination Feb 13 20:12:46.968438 waagent[2060]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:12:46.968438 waagent[2060]: pkts bytes target prot opt in out source destination Feb 13 20:12:46.968438 waagent[2060]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 20:12:46.968438 waagent[2060]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 20:12:46.968438 waagent[2060]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 20:12:46.971545 waagent[2060]: 2025-02-13T20:12:46.971477Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 13 20:12:46.971545 waagent[2060]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:12:46.971545 waagent[2060]: pkts bytes target prot opt in out source destination Feb 13 20:12:46.971545 waagent[2060]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:12:46.971545 waagent[2060]: pkts bytes target prot opt in out source destination Feb 13 20:12:46.971545 waagent[2060]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:12:46.971545 waagent[2060]: pkts bytes target prot opt in out source destination Feb 13 20:12:46.971545 waagent[2060]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 20:12:46.971545 waagent[2060]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 20:12:46.971545 waagent[2060]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 20:12:46.971818 waagent[2060]: 2025-02-13T20:12:46.971778Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 13 20:12:48.294966 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 1. Feb 13 20:12:48.307833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:12:48.405975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:12:48.410020 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:12:48.467691 kubelet[2307]: E0213 20:12:48.467610 2307 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:12:48.470176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:12:48.470307 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:12:58.545092 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:12:58.552839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:12:58.650824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:12:58.662171 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:12:58.704128 kubelet[2328]: E0213 20:12:58.704052 2328 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:12:58.707478 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:12:58.707689 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:13:00.546185 chronyd[1782]: Selected source PHC0
Feb 13 20:13:08.795078 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 13 20:13:08.801846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:13:09.134894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:13:09.140069 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:13:09.180346 kubelet[2349]: E0213 20:13:09.180268 2349 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:13:09.183388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:13:09.183549 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:13:13.932612 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Feb 13 20:13:19.295000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Feb 13 20:13:19.302855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:13:19.754916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:13:19.759429 (kubelet)[2369]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:13:19.801149 kubelet[2369]: E0213 20:13:19.801068 2369 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:13:19.804106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:13:19.804431 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:13:21.785246 update_engine[1801]: I20250213 20:13:21.784618 1801 update_attempter.cc:509] Updating boot flags...
Feb 13 20:13:22.201745 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2393)
Feb 13 20:13:22.325885 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2383)
Feb 13 20:13:30.044857 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Feb 13 20:13:30.051839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:13:30.342879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:13:30.345707 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:13:30.389099 kubelet[2504]: E0213 20:13:30.389026 2504 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:13:30.391036 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:13:30.391186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:13:33.947105 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 20:13:33.958904 systemd[1]: Started sshd@0-10.200.20.40:22-10.200.16.10:59430.service - OpenSSH per-connection server daemon (10.200.16.10:59430).
Feb 13 20:13:34.462374 sshd[2513]: Accepted publickey for core from 10.200.16.10 port 59430 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:13:34.463684 sshd-session[2513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:34.468569 systemd-logind[1798]: New session 3 of user core.
Feb 13 20:13:34.474015 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 20:13:34.874898 systemd[1]: Started sshd@1-10.200.20.40:22-10.200.16.10:59442.service - OpenSSH per-connection server daemon (10.200.16.10:59442).
Feb 13 20:13:35.316959 sshd[2518]: Accepted publickey for core from 10.200.16.10 port 59442 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:13:35.318287 sshd-session[2518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:35.322983 systemd-logind[1798]: New session 4 of user core.
Feb 13 20:13:35.330981 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 20:13:35.667732 sshd[2521]: Connection closed by 10.200.16.10 port 59442
Feb 13 20:13:35.668261 sshd-session[2518]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:35.670573 systemd[1]: sshd@1-10.200.20.40:22-10.200.16.10:59442.service: Deactivated successfully.
Feb 13 20:13:35.674168 systemd-logind[1798]: Session 4 logged out. Waiting for processes to exit.
Feb 13 20:13:35.674802 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 20:13:35.675848 systemd-logind[1798]: Removed session 4.
Feb 13 20:13:35.746883 systemd[1]: Started sshd@2-10.200.20.40:22-10.200.16.10:59446.service - OpenSSH per-connection server daemon (10.200.16.10:59446).
Feb 13 20:13:36.187440 sshd[2526]: Accepted publickey for core from 10.200.16.10 port 59446 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:13:36.188879 sshd-session[2526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:36.193969 systemd-logind[1798]: New session 5 of user core.
Feb 13 20:13:36.202962 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 20:13:36.534745 sshd[2529]: Connection closed by 10.200.16.10 port 59446
Feb 13 20:13:36.535583 sshd-session[2526]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:36.539756 systemd-logind[1798]: Session 5 logged out. Waiting for processes to exit.
Feb 13 20:13:36.540319 systemd[1]: sshd@2-10.200.20.40:22-10.200.16.10:59446.service: Deactivated successfully.
Feb 13 20:13:36.542218 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 20:13:36.543530 systemd-logind[1798]: Removed session 5.
Feb 13 20:13:36.619129 systemd[1]: Started sshd@3-10.200.20.40:22-10.200.16.10:59448.service - OpenSSH per-connection server daemon (10.200.16.10:59448).
Feb 13 20:13:37.060979 sshd[2534]: Accepted publickey for core from 10.200.16.10 port 59448 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:13:37.062258 sshd-session[2534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:37.066189 systemd-logind[1798]: New session 6 of user core.
Feb 13 20:13:37.075009 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 20:13:37.402735 sshd[2537]: Connection closed by 10.200.16.10 port 59448
Feb 13 20:13:37.403491 sshd-session[2534]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:37.407003 systemd-logind[1798]: Session 6 logged out. Waiting for processes to exit.
Feb 13 20:13:37.408011 systemd[1]: sshd@3-10.200.20.40:22-10.200.16.10:59448.service: Deactivated successfully.
Feb 13 20:13:37.409494 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 20:13:37.411153 systemd-logind[1798]: Removed session 6.
Feb 13 20:13:37.489893 systemd[1]: Started sshd@4-10.200.20.40:22-10.200.16.10:59456.service - OpenSSH per-connection server daemon (10.200.16.10:59456).
Feb 13 20:13:37.974178 sshd[2542]: Accepted publickey for core from 10.200.16.10 port 59456 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:13:37.975806 sshd-session[2542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:37.981663 systemd-logind[1798]: New session 7 of user core.
Feb 13 20:13:37.988959 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 20:13:38.284826 sudo[2546]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 20:13:38.285093 sudo[2546]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:13:38.304533 sudo[2546]: pam_unix(sudo:session): session closed for user root
Feb 13 20:13:38.375678 sshd[2545]: Connection closed by 10.200.16.10 port 59456
Feb 13 20:13:38.376499 sshd-session[2542]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:38.379370 systemd[1]: sshd@4-10.200.20.40:22-10.200.16.10:59456.service: Deactivated successfully.
Feb 13 20:13:38.383026 systemd-logind[1798]: Session 7 logged out. Waiting for processes to exit.
Feb 13 20:13:38.384038 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 20:13:38.385062 systemd-logind[1798]: Removed session 7.
Feb 13 20:13:38.457891 systemd[1]: Started sshd@5-10.200.20.40:22-10.200.16.10:59466.service - OpenSSH per-connection server daemon (10.200.16.10:59466).
Feb 13 20:13:38.938990 sshd[2551]: Accepted publickey for core from 10.200.16.10 port 59466 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:13:38.940315 sshd-session[2551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:38.945240 systemd-logind[1798]: New session 8 of user core.
Feb 13 20:13:38.951913 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 20:13:39.210327 sudo[2556]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 20:13:39.211035 sudo[2556]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:13:39.214424 sudo[2556]: pam_unix(sudo:session): session closed for user root
Feb 13 20:13:39.219175 sudo[2555]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 20:13:39.219436 sudo[2555]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:13:39.233941 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 20:13:39.257517 augenrules[2578]: No rules
Feb 13 20:13:39.258583 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 20:13:39.258938 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 20:13:39.261458 sudo[2555]: pam_unix(sudo:session): session closed for user root
Feb 13 20:13:39.331670 sshd[2554]: Connection closed by 10.200.16.10 port 59466
Feb 13 20:13:39.333483 sshd-session[2551]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:39.336987 systemd-logind[1798]: Session 8 logged out. Waiting for processes to exit.
Feb 13 20:13:39.338083 systemd[1]: sshd@5-10.200.20.40:22-10.200.16.10:59466.service: Deactivated successfully.
Feb 13 20:13:39.340600 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 20:13:39.342045 systemd-logind[1798]: Removed session 8.
Feb 13 20:13:39.421888 systemd[1]: Started sshd@6-10.200.20.40:22-10.200.16.10:42050.service - OpenSSH per-connection server daemon (10.200.16.10:42050).
Feb 13 20:13:39.905244 sshd[2587]: Accepted publickey for core from 10.200.16.10 port 42050 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:13:39.906483 sshd-session[2587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:39.911418 systemd-logind[1798]: New session 9 of user core.
Feb 13 20:13:39.916052 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 20:13:40.179068 sudo[2591]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 20:13:40.179334 sudo[2591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:13:40.527914 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Feb 13 20:13:40.535944 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 20:13:40.536423 (dockerd)[2610]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 20:13:40.537869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:13:40.824888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:13:40.839976 (kubelet)[2623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:13:40.882780 kubelet[2623]: E0213 20:13:40.882709 2623 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:13:40.885836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:13:40.885992 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:13:41.200715 dockerd[2610]: time="2025-02-13T20:13:41.200572572Z" level=info msg="Starting up"
Feb 13 20:13:41.449122 dockerd[2610]: time="2025-02-13T20:13:41.449067117Z" level=info msg="Loading containers: start."
Feb 13 20:13:41.649821 kernel: Initializing XFRM netlink socket
Feb 13 20:13:41.727125 systemd-networkd[1373]: docker0: Link UP
Feb 13 20:13:41.758290 dockerd[2610]: time="2025-02-13T20:13:41.758229162Z" level=info msg="Loading containers: done."
Feb 13 20:13:41.773964 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck617462025-merged.mount: Deactivated successfully.
Feb 13 20:13:41.781906 dockerd[2610]: time="2025-02-13T20:13:41.781313941Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 20:13:41.781906 dockerd[2610]: time="2025-02-13T20:13:41.781441542Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 20:13:41.781906 dockerd[2610]: time="2025-02-13T20:13:41.781615503Z" level=info msg="Daemon has completed initialization"
Feb 13 20:13:41.854399 dockerd[2610]: time="2025-02-13T20:13:41.854338935Z" level=info msg="API listen on /run/docker.sock"
Feb 13 20:13:41.854631 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 20:13:43.558993 containerd[1857]: time="2025-02-13T20:13:43.558657682Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\""
Feb 13 20:13:44.449207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount185960374.mount: Deactivated successfully.
Feb 13 20:13:46.374687 containerd[1857]: time="2025-02-13T20:13:46.374381514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:46.377776 containerd[1857]: time="2025-02-13T20:13:46.377510008Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865207"
Feb 13 20:13:46.383384 containerd[1857]: time="2025-02-13T20:13:46.383307832Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:46.392702 containerd[1857]: time="2025-02-13T20:13:46.392633512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:46.394075 containerd[1857]: time="2025-02-13T20:13:46.393912158Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.835210276s"
Feb 13 20:13:46.394075 containerd[1857]: time="2025-02-13T20:13:46.393951518Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\""
Feb 13 20:13:46.413499 containerd[1857]: time="2025-02-13T20:13:46.413441762Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 13 20:13:49.007519 containerd[1857]: time="2025-02-13T20:13:49.007469346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:49.010735 containerd[1857]: time="2025-02-13T20:13:49.010664789Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898594"
Feb 13 20:13:49.017150 containerd[1857]: time="2025-02-13T20:13:49.017107316Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:49.025470 containerd[1857]: time="2025-02-13T20:13:49.025403891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:49.026548 containerd[1857]: time="2025-02-13T20:13:49.026431478Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 2.612952316s"
Feb 13 20:13:49.026548 containerd[1857]: time="2025-02-13T20:13:49.026466199Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\""
Feb 13 20:13:49.046128 containerd[1857]: time="2025-02-13T20:13:49.046090948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 13 20:13:50.826554 containerd[1857]: time="2025-02-13T20:13:50.826495867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:50.833067 containerd[1857]: time="2025-02-13T20:13:50.832799774Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164934"
Feb 13 20:13:50.843195 containerd[1857]: time="2025-02-13T20:13:50.843136860Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:50.848834 containerd[1857]: time="2025-02-13T20:13:50.848779085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:50.849952 containerd[1857]: time="2025-02-13T20:13:50.849808449Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.8036795s"
Feb 13 20:13:50.849952 containerd[1857]: time="2025-02-13T20:13:50.849848369Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\""
Feb 13 20:13:50.869286 containerd[1857]: time="2025-02-13T20:13:50.869246935Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 20:13:51.044825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Feb 13 20:13:51.051845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:13:51.150821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:13:51.151112 (kubelet)[2910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:13:51.447103 kubelet[2910]: E0213 20:13:51.189745 2910 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:13:51.191569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:13:51.191724 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:13:52.295356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4191343348.mount: Deactivated successfully.
Feb 13 20:13:52.984363 containerd[1857]: time="2025-02-13T20:13:52.984295691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:52.988570 containerd[1857]: time="2025-02-13T20:13:52.988521029Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370"
Feb 13 20:13:52.992492 containerd[1857]: time="2025-02-13T20:13:52.992455407Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:52.998201 containerd[1857]: time="2025-02-13T20:13:52.998134752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:52.999199 containerd[1857]: time="2025-02-13T20:13:52.998780154Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 2.129493419s"
Feb 13 20:13:52.999199 containerd[1857]: time="2025-02-13T20:13:52.998817035Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\""
Feb 13 20:13:53.019360 containerd[1857]: time="2025-02-13T20:13:53.019250925Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 20:13:53.752924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1483584695.mount: Deactivated successfully.
Feb 13 20:13:54.846697 containerd[1857]: time="2025-02-13T20:13:54.846107331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:54.852260 containerd[1857]: time="2025-02-13T20:13:54.852002637Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Feb 13 20:13:54.856657 containerd[1857]: time="2025-02-13T20:13:54.856566097Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:54.863601 containerd[1857]: time="2025-02-13T20:13:54.863531768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:54.864911 containerd[1857]: time="2025-02-13T20:13:54.864792773Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.845503728s"
Feb 13 20:13:54.864911 containerd[1857]: time="2025-02-13T20:13:54.864826334Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 20:13:54.887093 containerd[1857]: time="2025-02-13T20:13:54.887048471Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 20:13:55.603988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277344907.mount: Deactivated successfully.
Feb 13 20:13:55.638681 containerd[1857]: time="2025-02-13T20:13:55.638049779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:55.642702 containerd[1857]: time="2025-02-13T20:13:55.642603199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Feb 13 20:13:55.648844 containerd[1857]: time="2025-02-13T20:13:55.648785346Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:55.655559 containerd[1857]: time="2025-02-13T20:13:55.655493856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:13:55.656465 containerd[1857]: time="2025-02-13T20:13:55.656335100Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 769.242788ms"
Feb 13 20:13:55.656465 containerd[1857]: time="2025-02-13T20:13:55.656372260Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 13 20:13:55.676994 containerd[1857]: time="2025-02-13T20:13:55.676785510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Feb 13 20:13:56.392827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2157924152.mount: Deactivated successfully.
Feb 13 20:14:00.774697 containerd[1857]: time="2025-02-13T20:14:00.774352430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:14:00.777473 containerd[1857]: time="2025-02-13T20:14:00.777090762Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
Feb 13 20:14:00.781608 containerd[1857]: time="2025-02-13T20:14:00.781536581Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:14:00.788495 containerd[1857]: time="2025-02-13T20:14:00.788432971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:14:00.790038 containerd[1857]: time="2025-02-13T20:14:00.789575456Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 5.112750106s"
Feb 13 20:14:00.790038 containerd[1857]: time="2025-02-13T20:14:00.789615897Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Feb 13 20:14:01.294941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Feb 13 20:14:01.300833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:14:01.415936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:14:01.422484 (kubelet)[3062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:14:01.464629 kubelet[3062]: E0213 20:14:01.464534 3062 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:14:01.468882 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:14:01.469050 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:14:06.174365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:14:06.180887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:14:06.209231 systemd[1]: Reloading requested from client PID 3121 ('systemctl') (unit session-9.scope)...
Feb 13 20:14:06.209253 systemd[1]: Reloading...
Feb 13 20:14:06.295665 zram_generator::config[3164]: No configuration found.
Feb 13 20:14:06.428223 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:14:06.512182 systemd[1]: Reloading finished in 302 ms.
Feb 13 20:14:06.560381 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 20:14:06.560464 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 20:14:06.560827 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:14:06.564966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:14:06.672917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:14:06.682004 (kubelet)[3240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:14:06.725507 kubelet[3240]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:14:06.726683 kubelet[3240]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 20:14:06.726683 kubelet[3240]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:14:06.726683 kubelet[3240]: I0213 20:14:06.726017 3240 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 20:14:08.011010 kubelet[3240]: I0213 20:14:08.010957 3240 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 20:14:08.012664 kubelet[3240]: I0213 20:14:08.011515 3240 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 20:14:08.012664 kubelet[3240]: I0213 20:14:08.011825 3240 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 20:14:08.029434 kubelet[3240]: I0213 20:14:08.029380 3240 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:14:08.029747 kubelet[3240]: E0213 20:14:08.029384 3240 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:08.039007 kubelet[3240]: I0213 20:14:08.038976 3240 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 20:14:08.039385 kubelet[3240]: I0213 20:14:08.039354 3240 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 20:14:08.039598 kubelet[3240]: I0213 20:14:08.039387 3240 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.1-a-1780829b1e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 20:14:08.039702 kubelet[3240]: I0213 20:14:08.039602 3240 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 20:14:08.039702 kubelet[3240]: I0213 20:14:08.039612 3240 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 20:14:08.039772 kubelet[3240]: I0213 20:14:08.039756 3240 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:14:08.040526 kubelet[3240]: I0213 20:14:08.040508 3240 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 20:14:08.040565 kubelet[3240]: I0213 20:14:08.040532 3240 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 20:14:08.040589 kubelet[3240]: I0213 20:14:08.040565 3240 kubelet.go:312] "Adding apiserver pod source"
Feb 13 20:14:08.040589 kubelet[3240]: I0213 20:14:08.040578 3240 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 20:14:08.041674 kubelet[3240]: W0213 20:14:08.041510 3240 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-1780829b1e&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:08.041674 kubelet[3240]: E0213 20:14:08.041564 3240 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-1780829b1e&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:08.041674 kubelet[3240]: W0213 20:14:08.041612 3240 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:08.041674 kubelet[3240]: E0213 20:14:08.041659 3240 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:08.043322 kubelet[3240]: I0213 20:14:08.042169 3240 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 20:14:08.043322 kubelet[3240]: I0213 20:14:08.042330 3240 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 20:14:08.043322 kubelet[3240]: W0213 20:14:08.042370 3240 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 20:14:08.043322 kubelet[3240]: I0213 20:14:08.043104 3240 server.go:1264] "Started kubelet"
Feb 13 20:14:08.047079 kubelet[3240]: I0213 20:14:08.047050 3240 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 20:14:08.048050 kubelet[3240]: E0213 20:14:08.047858 3240 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.1-a-1780829b1e.1823ddb46a56f7c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.1-a-1780829b1e,UID:ci-4152.2.1-a-1780829b1e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.1-a-1780829b1e,},FirstTimestamp:2025-02-13 20:14:08.043079621 +0000 UTC m=+1.357120409,LastTimestamp:2025-02-13 20:14:08.043079621 +0000 UTC m=+1.357120409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.1-a-1780829b1e,}"
Feb 13 20:14:08.049811 kubelet[3240]: I0213 20:14:08.049753 3240 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 20:14:08.050899 kubelet[3240]: I0213 20:14:08.050860 3240 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 20:14:08.051882 kubelet[3240]: I0213 20:14:08.051804 3240 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 20:14:08.052110 kubelet[3240]: I0213 20:14:08.052078 3240 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 20:14:08.054484 kubelet[3240]: I0213 20:14:08.054453 3240 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 20:14:08.054622 kubelet[3240]: I0213 20:14:08.054601 3240 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 20:14:08.054762 kubelet[3240]: I0213 20:14:08.054716 3240 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 20:14:08.056111 kubelet[3240]: W0213 20:14:08.055290 3240 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:08.056111 kubelet[3240]: E0213 20:14:08.055371 3240 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:08.056516 kubelet[3240]: E0213 20:14:08.056458 3240 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-1780829b1e?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="200ms"
Feb 13 20:14:08.060739 kubelet[3240]: E0213 20:14:08.060698 3240 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 20:14:08.061445 kubelet[3240]: I0213 20:14:08.061405 3240 factory.go:221] Registration of the containerd container factory successfully
Feb 13 20:14:08.061445 kubelet[3240]: I0213 20:14:08.061435 3240 factory.go:221] Registration of the systemd container factory successfully
Feb 13 20:14:08.061586 kubelet[3240]: I0213 20:14:08.061556 3240 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 20:14:08.072449 kubelet[3240]: I0213 20:14:08.072402 3240 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 20:14:08.084679 kubelet[3240]: I0213 20:14:08.084314 3240 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 20:14:08.084679 kubelet[3240]: I0213 20:14:08.084376 3240 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 20:14:08.084679 kubelet[3240]: I0213 20:14:08.084400 3240 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 20:14:08.084679 kubelet[3240]: E0213 20:14:08.084457 3240 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 20:14:08.094305 kubelet[3240]: W0213 20:14:08.094226 3240 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:08.094305 kubelet[3240]: E0213 20:14:08.094304 3240 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:08.133016 kubelet[3240]: I0213 20:14:08.132976 3240 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 20:14:08.133016 kubelet[3240]: I0213 20:14:08.132997 3240 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 20:14:08.133016 kubelet[3240]: I0213 20:14:08.133018 3240 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:14:08.138216 kubelet[3240]: I0213 20:14:08.138191 3240 policy_none.go:49] "None policy: Start"
Feb 13 20:14:08.138998 kubelet[3240]: I0213 20:14:08.138940 3240 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 20:14:08.138998 kubelet[3240]: I0213 20:14:08.138972 3240 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 20:14:08.148132 kubelet[3240]: I0213 20:14:08.148096 3240 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 20:14:08.148346 kubelet[3240]: I0213 20:14:08.148302 3240 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 20:14:08.148419 kubelet[3240]: I0213 20:14:08.148406 3240 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 20:14:08.152649 kubelet[3240]: E0213 20:14:08.152614 3240 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.1-a-1780829b1e\" not found"
Feb 13 20:14:08.156311 kubelet[3240]: I0213 20:14:08.156286 3240 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.156645 kubelet[3240]: E0213 20:14:08.156618 3240 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.185358 kubelet[3240]: I0213 20:14:08.184998 3240 topology_manager.go:215] "Topology Admit Handler" podUID="3120b038374837b248804fb843122239" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.187814 kubelet[3240]: I0213 20:14:08.187672 3240 topology_manager.go:215] "Topology Admit Handler" podUID="750a36f7b1c810856cd6ee65b9a5f8f7" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.189326 kubelet[3240]: I0213 20:14:08.189090 3240 topology_manager.go:215] "Topology Admit Handler" podUID="c18e23ab63eb500373637e4a6b6906f7" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.257112 kubelet[3240]: E0213 20:14:08.257067 3240 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-1780829b1e?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="400ms"
Feb 13 20:14:08.356665 kubelet[3240]: I0213 20:14:08.356427 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c18e23ab63eb500373637e4a6b6906f7-kubeconfig\") pod \"kube-scheduler-ci-4152.2.1-a-1780829b1e\" (UID: \"c18e23ab63eb500373637e4a6b6906f7\") " pod="kube-system/kube-scheduler-ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.356665 kubelet[3240]: I0213 20:14:08.356465 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3120b038374837b248804fb843122239-ca-certs\") pod \"kube-apiserver-ci-4152.2.1-a-1780829b1e\" (UID: \"3120b038374837b248804fb843122239\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.356665 kubelet[3240]: I0213 20:14:08.356488 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3120b038374837b248804fb843122239-k8s-certs\") pod \"kube-apiserver-ci-4152.2.1-a-1780829b1e\" (UID: \"3120b038374837b248804fb843122239\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.356665 kubelet[3240]: I0213 20:14:08.356504 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3120b038374837b248804fb843122239-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.1-a-1780829b1e\" (UID: \"3120b038374837b248804fb843122239\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.356665 kubelet[3240]: I0213 20:14:08.356524 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750a36f7b1c810856cd6ee65b9a5f8f7-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.1-a-1780829b1e\" (UID: \"750a36f7b1c810856cd6ee65b9a5f8f7\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.357268 kubelet[3240]: I0213 20:14:08.356542 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/750a36f7b1c810856cd6ee65b9a5f8f7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.1-a-1780829b1e\" (UID: \"750a36f7b1c810856cd6ee65b9a5f8f7\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.357268 kubelet[3240]: I0213 20:14:08.356558 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/750a36f7b1c810856cd6ee65b9a5f8f7-ca-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-1780829b1e\" (UID: \"750a36f7b1c810856cd6ee65b9a5f8f7\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.357268 kubelet[3240]: I0213 20:14:08.356576 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/750a36f7b1c810856cd6ee65b9a5f8f7-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.1-a-1780829b1e\" (UID: \"750a36f7b1c810856cd6ee65b9a5f8f7\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.357268 kubelet[3240]: I0213 20:14:08.356591 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/750a36f7b1c810856cd6ee65b9a5f8f7-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-1780829b1e\" (UID: \"750a36f7b1c810856cd6ee65b9a5f8f7\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.358437 kubelet[3240]: I0213 20:14:08.358132 3240 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.358547 kubelet[3240]: E0213 20:14:08.358462 3240 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.493756 containerd[1857]: time="2025-02-13T20:14:08.493684243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.1-a-1780829b1e,Uid:3120b038374837b248804fb843122239,Namespace:kube-system,Attempt:0,}"
Feb 13 20:14:08.494516 containerd[1857]: time="2025-02-13T20:14:08.494384326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.1-a-1780829b1e,Uid:750a36f7b1c810856cd6ee65b9a5f8f7,Namespace:kube-system,Attempt:0,}"
Feb 13 20:14:08.498067 containerd[1857]: time="2025-02-13T20:14:08.497919781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.1-a-1780829b1e,Uid:c18e23ab63eb500373637e4a6b6906f7,Namespace:kube-system,Attempt:0,}"
Feb 13 20:14:08.658182 kubelet[3240]: E0213 20:14:08.658014 3240 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-1780829b1e?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="800ms"
Feb 13 20:14:08.761298 kubelet[3240]: I0213 20:14:08.761111 3240 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.761743 kubelet[3240]: E0213 20:14:08.761712 3240 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:08.981913 kubelet[3240]: W0213 20:14:08.981777 3240 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:08.981913 kubelet[3240]: E0213 20:14:08.981844 3240 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:09.020362 kubelet[3240]: W0213 20:14:09.020262 3240 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-1780829b1e&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:09.020362 kubelet[3240]: E0213 20:14:09.020341 3240 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-1780829b1e&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:09.111046 kubelet[3240]: W0213 20:14:09.110975 3240 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:09.111046 kubelet[3240]: E0213 20:14:09.111046 3240 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:09.158964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864924606.mount: Deactivated successfully.
Feb 13 20:14:09.212771 containerd[1857]: time="2025-02-13T20:14:09.212717238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:14:09.223685 containerd[1857]: time="2025-02-13T20:14:09.223589363Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Feb 13 20:14:09.235620 containerd[1857]: time="2025-02-13T20:14:09.235513534Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:14:09.240704 containerd[1857]: time="2025-02-13T20:14:09.239948753Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:14:09.245629 containerd[1857]: time="2025-02-13T20:14:09.245584696Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:14:09.250003 containerd[1857]: time="2025-02-13T20:14:09.249947835Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 20:14:09.253848 containerd[1857]: time="2025-02-13T20:14:09.253810371Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 20:14:09.267483 containerd[1857]: time="2025-02-13T20:14:09.267378468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:14:09.268754 containerd[1857]: time="2025-02-13T20:14:09.268446433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 770.452892ms"
Feb 13 20:14:09.281868 containerd[1857]: time="2025-02-13T20:14:09.281601688Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 787.160602ms"
Feb 13 20:14:09.291072 containerd[1857]: time="2025-02-13T20:14:09.291024888Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 797.262125ms"
Feb 13 20:14:09.458950 kubelet[3240]: E0213 20:14:09.458893 3240 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-1780829b1e?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="1.6s"
Feb 13 20:14:09.481592 containerd[1857]: time="2025-02-13T20:14:09.478102958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:14:09.481592 containerd[1857]: time="2025-02-13T20:14:09.478170358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:14:09.481592 containerd[1857]: time="2025-02-13T20:14:09.478185598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:14:09.481592 containerd[1857]: time="2025-02-13T20:14:09.479075042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:14:09.481592 containerd[1857]: time="2025-02-13T20:14:09.480190806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:14:09.481592 containerd[1857]: time="2025-02-13T20:14:09.480241527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:14:09.481592 containerd[1857]: time="2025-02-13T20:14:09.480256567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:14:09.481592 containerd[1857]: time="2025-02-13T20:14:09.480749849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:14:09.489352 containerd[1857]: time="2025-02-13T20:14:09.488874963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:14:09.489352 containerd[1857]: time="2025-02-13T20:14:09.488939203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:14:09.489352 containerd[1857]: time="2025-02-13T20:14:09.488956763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:14:09.489352 containerd[1857]: time="2025-02-13T20:14:09.489044844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:14:09.559449 containerd[1857]: time="2025-02-13T20:14:09.558984779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.1-a-1780829b1e,Uid:3120b038374837b248804fb843122239,Namespace:kube-system,Attempt:0,} returns sandbox id \"062697fce95fc6b79eea61d66ea635d3033ab9bf89318b33ca021fae3d9e5001\""
Feb 13 20:14:09.569357 kubelet[3240]: I0213 20:14:09.568928 3240 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:09.569521 kubelet[3240]: E0213 20:14:09.569420 3240 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:09.576168 containerd[1857]: time="2025-02-13T20:14:09.576039691Z" level=info msg="CreateContainer within sandbox \"062697fce95fc6b79eea61d66ea635d3033ab9bf89318b33ca021fae3d9e5001\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 20:14:09.580353 containerd[1857]: time="2025-02-13T20:14:09.580241069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.1-a-1780829b1e,Uid:c18e23ab63eb500373637e4a6b6906f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef526f3e436ef115dcc0a21c4784f1e206ba7a1a5f4deaad8e69e1f09494546e\""
Feb 13 20:14:09.581193 containerd[1857]: time="2025-02-13T20:14:09.581004632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.1-a-1780829b1e,Uid:750a36f7b1c810856cd6ee65b9a5f8f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6c385bd5e511b23545e122cf2639f28c687bc15a884c932025dff2d17e95417\""
Feb 13 20:14:09.585192 containerd[1857]: time="2025-02-13T20:14:09.585145089Z" level=info msg="CreateContainer within sandbox \"ef526f3e436ef115dcc0a21c4784f1e206ba7a1a5f4deaad8e69e1f09494546e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 20:14:09.585710 containerd[1857]: time="2025-02-13T20:14:09.585456211Z" level=info msg="CreateContainer within sandbox \"f6c385bd5e511b23545e122cf2639f28c687bc15a884c932025dff2d17e95417\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 20:14:09.628126 kubelet[3240]: W0213 20:14:09.628021 3240 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:09.628126 kubelet[3240]: E0213 20:14:09.628089 3240 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Feb 13 20:14:09.674453 containerd[1857]: time="2025-02-13T20:14:09.674396306Z" level=info msg="CreateContainer within sandbox \"ef526f3e436ef115dcc0a21c4784f1e206ba7a1a5f4deaad8e69e1f09494546e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"831ed99839d1c803d1502342d7449a115bc6af687488fd21e18815c697034c0d\""
Feb 13 20:14:09.679699 containerd[1857]: time="2025-02-13T20:14:09.679357527Z" level=info msg="CreateContainer within sandbox \"062697fce95fc6b79eea61d66ea635d3033ab9bf89318b33ca021fae3d9e5001\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f51f072ac634bbb02c915623674cbde85ea409faf2eed6280c3ab15cfafe5f4\""
Feb 13 20:14:09.679699 containerd[1857]: time="2025-02-13T20:14:09.679674088Z" level=info msg="StartContainer for \"831ed99839d1c803d1502342d7449a115bc6af687488fd21e18815c697034c0d\""
Feb 13 20:14:09.685885 containerd[1857]: time="2025-02-13T20:14:09.685834794Z" level=info msg="CreateContainer within sandbox \"f6c385bd5e511b23545e122cf2639f28c687bc15a884c932025dff2d17e95417\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"df2a946976a18ebb005b7473f96c68fed698fe4b528b230591f6a6292366c208\""
Feb 13 20:14:09.686108 containerd[1857]: time="2025-02-13T20:14:09.686047195Z" level=info msg="StartContainer for \"5f51f072ac634bbb02c915623674cbde85ea409faf2eed6280c3ab15cfafe5f4\""
Feb 13 20:14:09.689665 containerd[1857]: time="2025-02-13T20:14:09.689155168Z" level=info msg="StartContainer for \"df2a946976a18ebb005b7473f96c68fed698fe4b528b230591f6a6292366c208\""
Feb 13 20:14:09.776112 containerd[1857]: time="2025-02-13T20:14:09.775849694Z" level=info msg="StartContainer for \"5f51f072ac634bbb02c915623674cbde85ea409faf2eed6280c3ab15cfafe5f4\" returns successfully"
Feb 13 20:14:09.776112 containerd[1857]: time="2025-02-13T20:14:09.775984095Z" level=info msg="StartContainer for \"831ed99839d1c803d1502342d7449a115bc6af687488fd21e18815c697034c0d\" returns successfully"
Feb 13 20:14:09.801593 containerd[1857]: time="2025-02-13T20:14:09.801541043Z" level=info msg="StartContainer for \"df2a946976a18ebb005b7473f96c68fed698fe4b528b230591f6a6292366c208\" returns successfully"
Feb 13 20:14:11.177711 kubelet[3240]: I0213 20:14:11.176155 3240 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:11.750855 kubelet[3240]: E0213 20:14:11.750806 3240 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.2.1-a-1780829b1e\" not found" node="ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:11.776049 kubelet[3240]: I0213 20:14:11.776019 3240 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.1-a-1780829b1e"
Feb 13 20:14:12.043489 kubelet[3240]: I0213 20:14:12.043188 3240 apiserver.go:52] "Watching apiserver"
Feb 13 20:14:12.055234 kubelet[3240]: I0213 20:14:12.055196 3240 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 20:14:13.155877 kubelet[3240]: W0213 20:14:13.155840 3240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 20:14:14.108954 systemd[1]: Reloading requested from client PID 3516 ('systemctl') (unit session-9.scope)...
Feb 13 20:14:14.108968 systemd[1]: Reloading...
Feb 13 20:14:14.203947 zram_generator::config[3562]: No configuration found.
Feb 13 20:14:14.306800 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:14:14.387161 systemd[1]: Reloading finished in 277 ms.
Feb 13 20:14:14.416974 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:14:14.417114 kubelet[3240]: E0213 20:14:14.416923 3240 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4152.2.1-a-1780829b1e.1823ddb46a56f7c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.1-a-1780829b1e,UID:ci-4152.2.1-a-1780829b1e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.1-a-1780829b1e,},FirstTimestamp:2025-02-13 20:14:08.043079621 +0000 UTC m=+1.357120409,LastTimestamp:2025-02-13 20:14:08.043079621 +0000 UTC m=+1.357120409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.1-a-1780829b1e,}"
Feb 13 20:14:14.417724 kubelet[3240]: I0213 20:14:14.417677 3240 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:14:14.434003 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 20:14:14.434289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:14:14.442168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:14:14.642628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:14:14.647873 (kubelet)[3630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:14:14.692065 kubelet[3630]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:14:14.692065 kubelet[3630]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 20:14:14.692065 kubelet[3630]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:14:14.692431 kubelet[3630]: I0213 20:14:14.692126 3630 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:14:14.696309 kubelet[3630]: I0213 20:14:14.696169 3630 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:14:14.696309 kubelet[3630]: I0213 20:14:14.696194 3630 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:14:14.696408 kubelet[3630]: I0213 20:14:14.696359 3630 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:14:14.697856 kubelet[3630]: I0213 20:14:14.697833 3630 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:14:14.699090 kubelet[3630]: I0213 20:14:14.699064 3630 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:14:14.713956 kubelet[3630]: I0213 20:14:14.713918 3630 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:14:14.714375 kubelet[3630]: I0213 20:14:14.714342 3630 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:14:14.714548 kubelet[3630]: I0213 20:14:14.714375 3630 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.1-a-1780829b1e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:14:14.714619 kubelet[3630]: I0213 20:14:14.714550 3630 topology_manager.go:138] "Creating topology manager with none policy" 
Feb 13 20:14:14.714619 kubelet[3630]: I0213 20:14:14.714559 3630 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:14:14.714619 kubelet[3630]: I0213 20:14:14.714589 3630 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:14:14.714730 kubelet[3630]: I0213 20:14:14.714715 3630 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:14:14.714730 kubelet[3630]: I0213 20:14:14.714726 3630 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:14:14.714772 kubelet[3630]: I0213 20:14:14.714750 3630 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:14:14.714772 kubelet[3630]: I0213 20:14:14.714763 3630 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:14:14.717157 kubelet[3630]: I0213 20:14:14.717102 3630 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 20:14:14.717438 kubelet[3630]: I0213 20:14:14.717425 3630 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:14:14.718139 kubelet[3630]: I0213 20:14:14.718125 3630 server.go:1264] "Started kubelet" Feb 13 20:14:14.725351 kubelet[3630]: I0213 20:14:14.724774 3630 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:14:14.736411 kubelet[3630]: I0213 20:14:14.736367 3630 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:14:14.737609 kubelet[3630]: I0213 20:14:14.737590 3630 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:14:14.739691 kubelet[3630]: I0213 20:14:14.738517 3630 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:14:14.739691 kubelet[3630]: I0213 20:14:14.738755 3630 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:14:14.740316 kubelet[3630]: I0213 20:14:14.740301 3630 
volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:14:14.743007 kubelet[3630]: I0213 20:14:14.742991 3630 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:14:14.743238 kubelet[3630]: I0213 20:14:14.743227 3630 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:14:14.748913 kubelet[3630]: I0213 20:14:14.748882 3630 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:14:14.750022 kubelet[3630]: I0213 20:14:14.749982 3630 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:14:14.750132 kubelet[3630]: I0213 20:14:14.750122 3630 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:14:14.750456 kubelet[3630]: I0213 20:14:14.750442 3630 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:14:14.750568 kubelet[3630]: E0213 20:14:14.750551 3630 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:14:14.763872 kubelet[3630]: I0213 20:14:14.763836 3630 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:14:14.763872 kubelet[3630]: I0213 20:14:14.763861 3630 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:14:14.764039 kubelet[3630]: I0213 20:14:14.763925 3630 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:14:14.815414 kubelet[3630]: I0213 20:14:14.815388 3630 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:14:14.848721 kubelet[3630]: I0213 20:14:14.815533 3630 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:14:14.848721 kubelet[3630]: I0213 20:14:14.815555 3630 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
20:14:14.848721 kubelet[3630]: I0213 20:14:14.844196 3630 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-1780829b1e" Feb 13 20:14:14.849201 kubelet[3630]: I0213 20:14:14.849057 3630 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:14:14.849201 kubelet[3630]: I0213 20:14:14.849079 3630 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:14:14.849201 kubelet[3630]: I0213 20:14:14.849100 3630 policy_none.go:49] "None policy: Start" Feb 13 20:14:14.850101 kubelet[3630]: I0213 20:14:14.850085 3630 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:14:14.850296 kubelet[3630]: I0213 20:14:14.850285 3630 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:14:14.850590 kubelet[3630]: I0213 20:14:14.850510 3630 state_mem.go:75] "Updated machine memory state" Feb 13 20:14:14.850767 kubelet[3630]: E0213 20:14:14.850753 3630 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:14:14.854279 kubelet[3630]: I0213 20:14:14.853024 3630 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:14:14.854279 kubelet[3630]: I0213 20:14:14.853190 3630 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:14:14.854279 kubelet[3630]: I0213 20:14:14.853281 3630 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:14:14.867061 kubelet[3630]: I0213 20:14:14.866047 3630 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.2.1-a-1780829b1e" Feb 13 20:14:14.867061 kubelet[3630]: I0213 20:14:14.866922 3630 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.052591 kubelet[3630]: I0213 20:14:15.052147 3630 topology_manager.go:215] "Topology Admit Handler" podUID="c18e23ab63eb500373637e4a6b6906f7" 
podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.052591 kubelet[3630]: I0213 20:14:15.052422 3630 topology_manager.go:215] "Topology Admit Handler" podUID="3120b038374837b248804fb843122239" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.052591 kubelet[3630]: I0213 20:14:15.052505 3630 topology_manager.go:215] "Topology Admit Handler" podUID="750a36f7b1c810856cd6ee65b9a5f8f7" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.060454 kubelet[3630]: W0213 20:14:15.060234 3630 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:14:15.064042 kubelet[3630]: W0213 20:14:15.063777 3630 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:14:15.064042 kubelet[3630]: W0213 20:14:15.063948 3630 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:14:15.064042 kubelet[3630]: E0213 20:14:15.063993 3630 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152.2.1-a-1780829b1e\" already exists" pod="kube-system/kube-controller-manager-ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.145132 kubelet[3630]: I0213 20:14:15.144856 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/750a36f7b1c810856cd6ee65b9a5f8f7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.1-a-1780829b1e\" (UID: \"750a36f7b1c810856cd6ee65b9a5f8f7\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.145132 kubelet[3630]: I0213 
20:14:15.144894 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3120b038374837b248804fb843122239-ca-certs\") pod \"kube-apiserver-ci-4152.2.1-a-1780829b1e\" (UID: \"3120b038374837b248804fb843122239\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.145132 kubelet[3630]: I0213 20:14:15.144915 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/750a36f7b1c810856cd6ee65b9a5f8f7-ca-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-1780829b1e\" (UID: \"750a36f7b1c810856cd6ee65b9a5f8f7\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.145132 kubelet[3630]: I0213 20:14:15.144935 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/750a36f7b1c810856cd6ee65b9a5f8f7-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-1780829b1e\" (UID: \"750a36f7b1c810856cd6ee65b9a5f8f7\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.145132 kubelet[3630]: I0213 20:14:15.144952 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750a36f7b1c810856cd6ee65b9a5f8f7-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.1-a-1780829b1e\" (UID: \"750a36f7b1c810856cd6ee65b9a5f8f7\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.145341 kubelet[3630]: I0213 20:14:15.144993 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c18e23ab63eb500373637e4a6b6906f7-kubeconfig\") pod \"kube-scheduler-ci-4152.2.1-a-1780829b1e\" (UID: 
\"c18e23ab63eb500373637e4a6b6906f7\") " pod="kube-system/kube-scheduler-ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.145341 kubelet[3630]: I0213 20:14:15.145009 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3120b038374837b248804fb843122239-k8s-certs\") pod \"kube-apiserver-ci-4152.2.1-a-1780829b1e\" (UID: \"3120b038374837b248804fb843122239\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.145341 kubelet[3630]: I0213 20:14:15.145024 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3120b038374837b248804fb843122239-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.1-a-1780829b1e\" (UID: \"3120b038374837b248804fb843122239\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.145341 kubelet[3630]: I0213 20:14:15.145041 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/750a36f7b1c810856cd6ee65b9a5f8f7-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.1-a-1780829b1e\" (UID: \"750a36f7b1c810856cd6ee65b9a5f8f7\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.716071 kubelet[3630]: I0213 20:14:15.715847 3630 apiserver.go:52] "Watching apiserver" Feb 13 20:14:15.744216 kubelet[3630]: I0213 20:14:15.744130 3630 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:14:15.808675 kubelet[3630]: I0213 20:14:15.808580 3630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.1-a-1780829b1e" podStartSLOduration=0.80856406 podStartE2EDuration="808.56406ms" podCreationTimestamp="2025-02-13 20:14:15 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:14:15.808385699 +0000 UTC m=+1.157392204" watchObservedRunningTime="2025-02-13 20:14:15.80856406 +0000 UTC m=+1.157570565" Feb 13 20:14:15.827885 kubelet[3630]: W0213 20:14:15.827682 3630 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:14:15.827885 kubelet[3630]: E0213 20:14:15.827768 3630 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152.2.1-a-1780829b1e\" already exists" pod="kube-system/kube-apiserver-ci-4152.2.1-a-1780829b1e" Feb 13 20:14:15.858005 kubelet[3630]: I0213 20:14:15.857943 3630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.1-a-1780829b1e" podStartSLOduration=0.857923906 podStartE2EDuration="857.923906ms" podCreationTimestamp="2025-02-13 20:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:14:15.837707822 +0000 UTC m=+1.186714327" watchObservedRunningTime="2025-02-13 20:14:15.857923906 +0000 UTC m=+1.206930411" Feb 13 20:14:15.875447 kubelet[3630]: I0213 20:14:15.875351 3630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.1-a-1780829b1e" podStartSLOduration=2.875335659 podStartE2EDuration="2.875335659s" podCreationTimestamp="2025-02-13 20:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:14:15.85887175 +0000 UTC m=+1.207878255" watchObservedRunningTime="2025-02-13 20:14:15.875335659 +0000 UTC m=+1.224342164" Feb 13 20:14:19.597502 sudo[2591]: pam_unix(sudo:session): session closed for user root Feb 13 20:14:19.667673 sshd[2590]: Connection closed by 
10.200.16.10 port 42050 Feb 13 20:14:19.668235 sshd-session[2587]: pam_unix(sshd:session): session closed for user core Feb 13 20:14:19.671951 systemd[1]: sshd@6-10.200.20.40:22-10.200.16.10:42050.service: Deactivated successfully. Feb 13 20:14:19.675243 systemd-logind[1798]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:14:19.675741 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:14:19.677039 systemd-logind[1798]: Removed session 9. Feb 13 20:14:29.511108 kubelet[3630]: I0213 20:14:29.511026 3630 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:14:29.515074 containerd[1857]: time="2025-02-13T20:14:29.513384782Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:14:29.517425 kubelet[3630]: I0213 20:14:29.514278 3630 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:14:30.412504 kubelet[3630]: I0213 20:14:30.412457 3630 topology_manager.go:215] "Topology Admit Handler" podUID="1310cc1a-0ac2-4139-ab80-55aae9cdd0b6" podNamespace="kube-system" podName="kube-proxy-r2m4z" Feb 13 20:14:30.432181 kubelet[3630]: I0213 20:14:30.432044 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1310cc1a-0ac2-4139-ab80-55aae9cdd0b6-kube-proxy\") pod \"kube-proxy-r2m4z\" (UID: \"1310cc1a-0ac2-4139-ab80-55aae9cdd0b6\") " pod="kube-system/kube-proxy-r2m4z" Feb 13 20:14:30.432181 kubelet[3630]: I0213 20:14:30.432080 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1310cc1a-0ac2-4139-ab80-55aae9cdd0b6-xtables-lock\") pod \"kube-proxy-r2m4z\" (UID: \"1310cc1a-0ac2-4139-ab80-55aae9cdd0b6\") " pod="kube-system/kube-proxy-r2m4z" Feb 13 20:14:30.432181 kubelet[3630]: I0213 
20:14:30.432098 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1310cc1a-0ac2-4139-ab80-55aae9cdd0b6-lib-modules\") pod \"kube-proxy-r2m4z\" (UID: \"1310cc1a-0ac2-4139-ab80-55aae9cdd0b6\") " pod="kube-system/kube-proxy-r2m4z" Feb 13 20:14:30.432181 kubelet[3630]: I0213 20:14:30.432118 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp49c\" (UniqueName: \"kubernetes.io/projected/1310cc1a-0ac2-4139-ab80-55aae9cdd0b6-kube-api-access-dp49c\") pod \"kube-proxy-r2m4z\" (UID: \"1310cc1a-0ac2-4139-ab80-55aae9cdd0b6\") " pod="kube-system/kube-proxy-r2m4z" Feb 13 20:14:30.598667 kubelet[3630]: I0213 20:14:30.594934 3630 topology_manager.go:215] "Topology Admit Handler" podUID="f13799e3-2bdb-47f4-9fec-3a2daf1d3e4f" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-5b6p7" Feb 13 20:14:30.634170 kubelet[3630]: I0213 20:14:30.634117 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttzhg\" (UniqueName: \"kubernetes.io/projected/f13799e3-2bdb-47f4-9fec-3a2daf1d3e4f-kube-api-access-ttzhg\") pod \"tigera-operator-7bc55997bb-5b6p7\" (UID: \"f13799e3-2bdb-47f4-9fec-3a2daf1d3e4f\") " pod="tigera-operator/tigera-operator-7bc55997bb-5b6p7" Feb 13 20:14:30.634170 kubelet[3630]: I0213 20:14:30.634175 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f13799e3-2bdb-47f4-9fec-3a2daf1d3e4f-var-lib-calico\") pod \"tigera-operator-7bc55997bb-5b6p7\" (UID: \"f13799e3-2bdb-47f4-9fec-3a2daf1d3e4f\") " pod="tigera-operator/tigera-operator-7bc55997bb-5b6p7" Feb 13 20:14:30.717628 containerd[1857]: time="2025-02-13T20:14:30.717454277Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-r2m4z,Uid:1310cc1a-0ac2-4139-ab80-55aae9cdd0b6,Namespace:kube-system,Attempt:0,}" Feb 13 20:14:30.902103 containerd[1857]: time="2025-02-13T20:14:30.901851623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-5b6p7,Uid:f13799e3-2bdb-47f4-9fec-3a2daf1d3e4f,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:14:31.495673 containerd[1857]: time="2025-02-13T20:14:31.495481219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:14:31.495673 containerd[1857]: time="2025-02-13T20:14:31.495565779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:14:31.495673 containerd[1857]: time="2025-02-13T20:14:31.495591019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:31.496233 containerd[1857]: time="2025-02-13T20:14:31.496106781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:31.505397 containerd[1857]: time="2025-02-13T20:14:31.505113536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:14:31.506240 containerd[1857]: time="2025-02-13T20:14:31.506196300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:14:31.506293 containerd[1857]: time="2025-02-13T20:14:31.506269140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:31.506732 containerd[1857]: time="2025-02-13T20:14:31.506684262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:31.551152 containerd[1857]: time="2025-02-13T20:14:31.551085232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r2m4z,Uid:1310cc1a-0ac2-4139-ab80-55aae9cdd0b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"965594b5c26473b3eaa577c104a9bac06ac92efbe87298db6e247cdff24f211e\"" Feb 13 20:14:31.557791 containerd[1857]: time="2025-02-13T20:14:31.557566297Z" level=info msg="CreateContainer within sandbox \"965594b5c26473b3eaa577c104a9bac06ac92efbe87298db6e247cdff24f211e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:14:31.560878 containerd[1857]: time="2025-02-13T20:14:31.560775189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-5b6p7,Uid:f13799e3-2bdb-47f4-9fec-3a2daf1d3e4f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"44215686774c9b6248b21ca7f8c81deef8fcf4eb226f5e70dea6e0aa7653b7b0\"" Feb 13 20:14:31.562324 containerd[1857]: time="2025-02-13T20:14:31.562295995Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:14:31.615375 containerd[1857]: time="2025-02-13T20:14:31.615258198Z" level=info msg="CreateContainer within sandbox \"965594b5c26473b3eaa577c104a9bac06ac92efbe87298db6e247cdff24f211e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"68718fd9eb3865db91851b10229d3ec79c6b898d2bca5942612e05f0df508fea\"" Feb 13 20:14:31.618473 containerd[1857]: time="2025-02-13T20:14:31.616273922Z" level=info msg="StartContainer for \"68718fd9eb3865db91851b10229d3ec79c6b898d2bca5942612e05f0df508fea\"" Feb 13 20:14:31.686843 containerd[1857]: time="2025-02-13T20:14:31.686658711Z" level=info msg="StartContainer for \"68718fd9eb3865db91851b10229d3ec79c6b898d2bca5942612e05f0df508fea\" returns successfully" Feb 13 20:14:31.844174 kubelet[3630]: I0213 20:14:31.843988 3630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-proxy-r2m4z" podStartSLOduration=1.843968754 podStartE2EDuration="1.843968754s" podCreationTimestamp="2025-02-13 20:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:14:31.843565953 +0000 UTC m=+17.192572458" watchObservedRunningTime="2025-02-13 20:14:31.843968754 +0000 UTC m=+17.192975219" Feb 13 20:14:33.003455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3402271351.mount: Deactivated successfully. Feb 13 20:14:33.587606 containerd[1857]: time="2025-02-13T20:14:33.587476677Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:33.591353 containerd[1857]: time="2025-02-13T20:14:33.591128611Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Feb 13 20:14:33.595927 containerd[1857]: time="2025-02-13T20:14:33.595863229Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:33.603050 containerd[1857]: time="2025-02-13T20:14:33.602968616Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:33.604104 containerd[1857]: time="2025-02-13T20:14:33.603931780Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.041257784s" Feb 13 20:14:33.604104 containerd[1857]: time="2025-02-13T20:14:33.603964060Z" level=info 
msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Feb 13 20:14:33.606594 containerd[1857]: time="2025-02-13T20:14:33.606479469Z" level=info msg="CreateContainer within sandbox \"44215686774c9b6248b21ca7f8c81deef8fcf4eb226f5e70dea6e0aa7653b7b0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:14:33.653288 containerd[1857]: time="2025-02-13T20:14:33.653240089Z" level=info msg="CreateContainer within sandbox \"44215686774c9b6248b21ca7f8c81deef8fcf4eb226f5e70dea6e0aa7653b7b0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6083b8aab682b8b216343cbd80ec6b627b409ad2e831a8fc66fedebb7f3329b5\"" Feb 13 20:14:33.654779 containerd[1857]: time="2025-02-13T20:14:33.654662574Z" level=info msg="StartContainer for \"6083b8aab682b8b216343cbd80ec6b627b409ad2e831a8fc66fedebb7f3329b5\"" Feb 13 20:14:33.708408 containerd[1857]: time="2025-02-13T20:14:33.708358060Z" level=info msg="StartContainer for \"6083b8aab682b8b216343cbd80ec6b627b409ad2e831a8fc66fedebb7f3329b5\" returns successfully" Feb 13 20:14:37.438163 kubelet[3630]: I0213 20:14:37.433949 3630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-5b6p7" podStartSLOduration=5.390798096 podStartE2EDuration="7.433928727s" podCreationTimestamp="2025-02-13 20:14:30 +0000 UTC" firstStartedPulling="2025-02-13 20:14:31.561933473 +0000 UTC m=+16.910939938" lastFinishedPulling="2025-02-13 20:14:33.605064064 +0000 UTC m=+18.954070569" observedRunningTime="2025-02-13 20:14:33.847181512 +0000 UTC m=+19.196188017" watchObservedRunningTime="2025-02-13 20:14:37.433928727 +0000 UTC m=+22.782935192" Feb 13 20:14:37.438163 kubelet[3630]: I0213 20:14:37.434094 3630 topology_manager.go:215] "Topology Admit Handler" podUID="a47eff5e-448a-4889-b525-7af8e8e67ae5" podNamespace="calico-system" podName="calico-typha-798dcdf876-l4h8t" 
Feb 13 20:14:37.471488 kubelet[3630]: I0213 20:14:37.471439 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsplf\" (UniqueName: \"kubernetes.io/projected/a47eff5e-448a-4889-b525-7af8e8e67ae5-kube-api-access-nsplf\") pod \"calico-typha-798dcdf876-l4h8t\" (UID: \"a47eff5e-448a-4889-b525-7af8e8e67ae5\") " pod="calico-system/calico-typha-798dcdf876-l4h8t" Feb 13 20:14:37.471488 kubelet[3630]: I0213 20:14:37.471483 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a47eff5e-448a-4889-b525-7af8e8e67ae5-tigera-ca-bundle\") pod \"calico-typha-798dcdf876-l4h8t\" (UID: \"a47eff5e-448a-4889-b525-7af8e8e67ae5\") " pod="calico-system/calico-typha-798dcdf876-l4h8t" Feb 13 20:14:37.471654 kubelet[3630]: I0213 20:14:37.471503 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a47eff5e-448a-4889-b525-7af8e8e67ae5-typha-certs\") pod \"calico-typha-798dcdf876-l4h8t\" (UID: \"a47eff5e-448a-4889-b525-7af8e8e67ae5\") " pod="calico-system/calico-typha-798dcdf876-l4h8t" Feb 13 20:14:37.539359 kubelet[3630]: I0213 20:14:37.538949 3630 topology_manager.go:215] "Topology Admit Handler" podUID="1572950e-c233-4a41-be6d-4562c1a7f2d6" podNamespace="calico-system" podName="calico-node-w9plx" Feb 13 20:14:37.572178 kubelet[3630]: I0213 20:14:37.572086 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1572950e-c233-4a41-be6d-4562c1a7f2d6-lib-modules\") pod \"calico-node-w9plx\" (UID: \"1572950e-c233-4a41-be6d-4562c1a7f2d6\") " pod="calico-system/calico-node-w9plx" Feb 13 20:14:37.572178 kubelet[3630]: I0213 20:14:37.572143 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1572950e-c233-4a41-be6d-4562c1a7f2d6-tigera-ca-bundle\") pod \"calico-node-w9plx\" (UID: \"1572950e-c233-4a41-be6d-4562c1a7f2d6\") " pod="calico-system/calico-node-w9plx" Feb 13 20:14:37.573138 kubelet[3630]: I0213 20:14:37.572264 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1572950e-c233-4a41-be6d-4562c1a7f2d6-var-lib-calico\") pod \"calico-node-w9plx\" (UID: \"1572950e-c233-4a41-be6d-4562c1a7f2d6\") " pod="calico-system/calico-node-w9plx" Feb 13 20:14:37.573138 kubelet[3630]: I0213 20:14:37.572285 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1572950e-c233-4a41-be6d-4562c1a7f2d6-cni-bin-dir\") pod \"calico-node-w9plx\" (UID: \"1572950e-c233-4a41-be6d-4562c1a7f2d6\") " pod="calico-system/calico-node-w9plx" Feb 13 20:14:37.573138 kubelet[3630]: I0213 20:14:37.572346 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1572950e-c233-4a41-be6d-4562c1a7f2d6-cni-log-dir\") pod \"calico-node-w9plx\" (UID: \"1572950e-c233-4a41-be6d-4562c1a7f2d6\") " pod="calico-system/calico-node-w9plx" Feb 13 20:14:37.573138 kubelet[3630]: I0213 20:14:37.572366 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1572950e-c233-4a41-be6d-4562c1a7f2d6-var-run-calico\") pod \"calico-node-w9plx\" (UID: \"1572950e-c233-4a41-be6d-4562c1a7f2d6\") " pod="calico-system/calico-node-w9plx" Feb 13 20:14:37.573138 kubelet[3630]: I0213 20:14:37.572417 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/1572950e-c233-4a41-be6d-4562c1a7f2d6-xtables-lock\") pod \"calico-node-w9plx\" (UID: \"1572950e-c233-4a41-be6d-4562c1a7f2d6\") " pod="calico-system/calico-node-w9plx" Feb 13 20:14:37.574505 kubelet[3630]: I0213 20:14:37.572436 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1572950e-c233-4a41-be6d-4562c1a7f2d6-flexvol-driver-host\") pod \"calico-node-w9plx\" (UID: \"1572950e-c233-4a41-be6d-4562c1a7f2d6\") " pod="calico-system/calico-node-w9plx" Feb 13 20:14:37.574505 kubelet[3630]: I0213 20:14:37.572454 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtbmk\" (UniqueName: \"kubernetes.io/projected/1572950e-c233-4a41-be6d-4562c1a7f2d6-kube-api-access-mtbmk\") pod \"calico-node-w9plx\" (UID: \"1572950e-c233-4a41-be6d-4562c1a7f2d6\") " pod="calico-system/calico-node-w9plx" Feb 13 20:14:37.574505 kubelet[3630]: I0213 20:14:37.572468 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1572950e-c233-4a41-be6d-4562c1a7f2d6-policysync\") pod \"calico-node-w9plx\" (UID: \"1572950e-c233-4a41-be6d-4562c1a7f2d6\") " pod="calico-system/calico-node-w9plx" Feb 13 20:14:37.574505 kubelet[3630]: I0213 20:14:37.572484 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1572950e-c233-4a41-be6d-4562c1a7f2d6-node-certs\") pod \"calico-node-w9plx\" (UID: \"1572950e-c233-4a41-be6d-4562c1a7f2d6\") " pod="calico-system/calico-node-w9plx" Feb 13 20:14:37.574505 kubelet[3630]: I0213 20:14:37.572510 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/1572950e-c233-4a41-be6d-4562c1a7f2d6-cni-net-dir\") pod \"calico-node-w9plx\" (UID: \"1572950e-c233-4a41-be6d-4562c1a7f2d6\") " pod="calico-system/calico-node-w9plx" Feb 13 20:14:37.675699 kubelet[3630]: I0213 20:14:37.673780 3630 topology_manager.go:215] "Topology Admit Handler" podUID="7a0204f2-4591-46c1-b244-ab8ca59a1ddb" podNamespace="calico-system" podName="csi-node-driver-lvfcz" Feb 13 20:14:37.677425 kubelet[3630]: E0213 20:14:37.677376 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lvfcz" podUID="7a0204f2-4591-46c1-b244-ab8ca59a1ddb" Feb 13 20:14:37.683628 kubelet[3630]: E0213 20:14:37.683453 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.683628 kubelet[3630]: W0213 20:14:37.683478 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.683628 kubelet[3630]: E0213 20:14:37.683498 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.698859 kubelet[3630]: E0213 20:14:37.698775 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.698859 kubelet[3630]: W0213 20:14:37.698796 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.698859 kubelet[3630]: E0213 20:14:37.698827 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.742304 containerd[1857]: time="2025-02-13T20:14:37.742207943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-798dcdf876-l4h8t,Uid:a47eff5e-448a-4889-b525-7af8e8e67ae5,Namespace:calico-system,Attempt:0,}" Feb 13 20:14:37.772181 kubelet[3630]: E0213 20:14:37.772040 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.772181 kubelet[3630]: W0213 20:14:37.772064 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.772181 kubelet[3630]: E0213 20:14:37.772084 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.772604 kubelet[3630]: E0213 20:14:37.772549 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.772604 kubelet[3630]: W0213 20:14:37.772561 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.772604 kubelet[3630]: E0213 20:14:37.772595 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.773122 kubelet[3630]: E0213 20:14:37.772861 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.773122 kubelet[3630]: W0213 20:14:37.772872 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.773122 kubelet[3630]: E0213 20:14:37.772903 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.777362 kubelet[3630]: I0213 20:14:37.777325 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7a0204f2-4591-46c1-b244-ab8ca59a1ddb-varrun\") pod \"csi-node-driver-lvfcz\" (UID: \"7a0204f2-4591-46c1-b244-ab8ca59a1ddb\") " pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:37.777498 kubelet[3630]: E0213 20:14:37.777481 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.777498 kubelet[3630]: W0213 20:14:37.777495 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.777588 kubelet[3630]: E0213 20:14:37.777509 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.777588 kubelet[3630]: I0213 20:14:37.777525 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7a0204f2-4591-46c1-b244-ab8ca59a1ddb-registration-dir\") pod \"csi-node-driver-lvfcz\" (UID: \"7a0204f2-4591-46c1-b244-ab8ca59a1ddb\") " pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:37.777809 kubelet[3630]: E0213 20:14:37.777779 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.777809 kubelet[3630]: W0213 20:14:37.777798 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.777809 kubelet[3630]: E0213 20:14:37.777809 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.777939 kubelet[3630]: I0213 20:14:37.777824 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7a0204f2-4591-46c1-b244-ab8ca59a1ddb-socket-dir\") pod \"csi-node-driver-lvfcz\" (UID: \"7a0204f2-4591-46c1-b244-ab8ca59a1ddb\") " pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:37.778009 kubelet[3630]: E0213 20:14:37.777986 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.778009 kubelet[3630]: W0213 20:14:37.778000 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.778009 kubelet[3630]: E0213 20:14:37.778010 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.778095 kubelet[3630]: I0213 20:14:37.778024 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgt7r\" (UniqueName: \"kubernetes.io/projected/7a0204f2-4591-46c1-b244-ab8ca59a1ddb-kube-api-access-kgt7r\") pod \"csi-node-driver-lvfcz\" (UID: \"7a0204f2-4591-46c1-b244-ab8ca59a1ddb\") " pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:37.778212 kubelet[3630]: E0213 20:14:37.778172 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.778268 kubelet[3630]: W0213 20:14:37.778216 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.778268 kubelet[3630]: E0213 20:14:37.778255 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.778268 kubelet[3630]: I0213 20:14:37.778275 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a0204f2-4591-46c1-b244-ab8ca59a1ddb-kubelet-dir\") pod \"csi-node-driver-lvfcz\" (UID: \"7a0204f2-4591-46c1-b244-ab8ca59a1ddb\") " pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:37.778558 kubelet[3630]: E0213 20:14:37.778545 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.778718 kubelet[3630]: W0213 20:14:37.778593 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.778718 kubelet[3630]: E0213 20:14:37.778616 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.784322 kubelet[3630]: E0213 20:14:37.784169 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.784322 kubelet[3630]: W0213 20:14:37.784193 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.785271 kubelet[3630]: E0213 20:14:37.784223 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.785860 kubelet[3630]: E0213 20:14:37.785833 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.785860 kubelet[3630]: W0213 20:14:37.785855 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.785997 kubelet[3630]: E0213 20:14:37.785915 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.786114 kubelet[3630]: E0213 20:14:37.786099 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.786114 kubelet[3630]: W0213 20:14:37.786112 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.786242 kubelet[3630]: E0213 20:14:37.786184 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.788197 kubelet[3630]: E0213 20:14:37.787305 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.788197 kubelet[3630]: W0213 20:14:37.787324 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.788386 kubelet[3630]: E0213 20:14:37.788244 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.788747 kubelet[3630]: E0213 20:14:37.787936 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.788747 kubelet[3630]: W0213 20:14:37.788709 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.789965 kubelet[3630]: E0213 20:14:37.789579 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.789965 kubelet[3630]: E0213 20:14:37.789597 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.789965 kubelet[3630]: W0213 20:14:37.789610 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.789965 kubelet[3630]: E0213 20:14:37.789799 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.789965 kubelet[3630]: W0213 20:14:37.789810 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.789965 kubelet[3630]: E0213 20:14:37.789821 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.789965 kubelet[3630]: E0213 20:14:37.789937 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.790356 kubelet[3630]: E0213 20:14:37.790154 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.790356 kubelet[3630]: W0213 20:14:37.790173 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.790356 kubelet[3630]: E0213 20:14:37.790186 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.791063 kubelet[3630]: E0213 20:14:37.791036 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.791063 kubelet[3630]: W0213 20:14:37.791056 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.791148 kubelet[3630]: E0213 20:14:37.791072 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.802472 containerd[1857]: time="2025-02-13T20:14:37.801722530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:14:37.802472 containerd[1857]: time="2025-02-13T20:14:37.801937211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:14:37.803385 containerd[1857]: time="2025-02-13T20:14:37.802072572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:37.803385 containerd[1857]: time="2025-02-13T20:14:37.802626414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:37.849562 containerd[1857]: time="2025-02-13T20:14:37.849347312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w9plx,Uid:1572950e-c233-4a41-be6d-4562c1a7f2d6,Namespace:calico-system,Attempt:0,}" Feb 13 20:14:37.856337 containerd[1857]: time="2025-02-13T20:14:37.856306299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-798dcdf876-l4h8t,Uid:a47eff5e-448a-4889-b525-7af8e8e67ae5,Namespace:calico-system,Attempt:0,} returns sandbox id \"6bfb7a5f2f50cdd7aad9f59d305170d5d1bae5e8908d6f3350fc21e25b9d3aff\"" Feb 13 20:14:37.858178 containerd[1857]: time="2025-02-13T20:14:37.858140346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:14:37.879590 kubelet[3630]: E0213 20:14:37.879555 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.879590 kubelet[3630]: W0213 20:14:37.879590 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.879873 kubelet[3630]: E0213 20:14:37.879612 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.879914 kubelet[3630]: E0213 20:14:37.879894 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.879914 kubelet[3630]: W0213 20:14:37.879903 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.879914 kubelet[3630]: E0213 20:14:37.879919 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.880156 kubelet[3630]: E0213 20:14:37.880143 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.880259 kubelet[3630]: W0213 20:14:37.880186 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.880259 kubelet[3630]: E0213 20:14:37.880209 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.880482 kubelet[3630]: E0213 20:14:37.880460 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.880482 kubelet[3630]: W0213 20:14:37.880477 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.880564 kubelet[3630]: E0213 20:14:37.880496 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.880817 kubelet[3630]: E0213 20:14:37.880799 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.880817 kubelet[3630]: W0213 20:14:37.880815 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.880926 kubelet[3630]: E0213 20:14:37.880837 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.881065 kubelet[3630]: E0213 20:14:37.881047 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.881065 kubelet[3630]: W0213 20:14:37.881061 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.881129 kubelet[3630]: E0213 20:14:37.881072 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.881277 kubelet[3630]: E0213 20:14:37.881262 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.881277 kubelet[3630]: W0213 20:14:37.881276 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.881345 kubelet[3630]: E0213 20:14:37.881290 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.881475 kubelet[3630]: E0213 20:14:37.881456 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.881475 kubelet[3630]: W0213 20:14:37.881468 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.881611 kubelet[3630]: E0213 20:14:37.881546 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.881665 kubelet[3630]: E0213 20:14:37.881614 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.881665 kubelet[3630]: W0213 20:14:37.881623 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.881774 kubelet[3630]: E0213 20:14:37.881747 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.881899 kubelet[3630]: E0213 20:14:37.881887 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.881927 kubelet[3630]: W0213 20:14:37.881900 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.881927 kubelet[3630]: E0213 20:14:37.881921 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.882118 kubelet[3630]: E0213 20:14:37.882103 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.882118 kubelet[3630]: W0213 20:14:37.882115 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.882266 kubelet[3630]: E0213 20:14:37.882197 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.882266 kubelet[3630]: E0213 20:14:37.882262 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.882438 kubelet[3630]: W0213 20:14:37.882268 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.882438 kubelet[3630]: E0213 20:14:37.882298 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.882438 kubelet[3630]: E0213 20:14:37.882408 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.882438 kubelet[3630]: W0213 20:14:37.882415 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.882673 kubelet[3630]: E0213 20:14:37.882558 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.882760 kubelet[3630]: E0213 20:14:37.882745 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.882760 kubelet[3630]: W0213 20:14:37.882758 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.882829 kubelet[3630]: E0213 20:14:37.882770 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.883092 kubelet[3630]: E0213 20:14:37.883024 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.883092 kubelet[3630]: W0213 20:14:37.883041 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.883160 kubelet[3630]: E0213 20:14:37.883121 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.883385 kubelet[3630]: E0213 20:14:37.883238 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.883385 kubelet[3630]: W0213 20:14:37.883248 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.883385 kubelet[3630]: E0213 20:14:37.883371 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.883385 kubelet[3630]: W0213 20:14:37.883384 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.883518 kubelet[3630]: E0213 20:14:37.883369 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.883518 kubelet[3630]: E0213 20:14:37.883458 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.884120 kubelet[3630]: E0213 20:14:37.883573 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.884120 kubelet[3630]: W0213 20:14:37.883584 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.884270 kubelet[3630]: E0213 20:14:37.884235 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.884420 kubelet[3630]: E0213 20:14:37.884409 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.884560 kubelet[3630]: W0213 20:14:37.884477 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.884652 kubelet[3630]: E0213 20:14:37.884612 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.884912 kubelet[3630]: E0213 20:14:37.884826 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.884912 kubelet[3630]: W0213 20:14:37.884840 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.884912 kubelet[3630]: E0213 20:14:37.884884 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.885195 kubelet[3630]: E0213 20:14:37.885181 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.885330 kubelet[3630]: W0213 20:14:37.885249 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.885330 kubelet[3630]: E0213 20:14:37.885289 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.886848 kubelet[3630]: E0213 20:14:37.886824 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.887468 kubelet[3630]: W0213 20:14:37.886946 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.887468 kubelet[3630]: E0213 20:14:37.887036 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.889355 kubelet[3630]: E0213 20:14:37.889246 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.889355 kubelet[3630]: W0213 20:14:37.889265 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.889355 kubelet[3630]: E0213 20:14:37.889298 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.891441 kubelet[3630]: E0213 20:14:37.890945 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.891441 kubelet[3630]: W0213 20:14:37.890959 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.891441 kubelet[3630]: E0213 20:14:37.890975 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.891769 kubelet[3630]: E0213 20:14:37.891710 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.891769 kubelet[3630]: W0213 20:14:37.891735 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.891769 kubelet[3630]: E0213 20:14:37.891748 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:37.900392 kubelet[3630]: E0213 20:14:37.900368 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:37.900663 kubelet[3630]: W0213 20:14:37.900529 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:37.900663 kubelet[3630]: E0213 20:14:37.900552 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:37.900950 containerd[1857]: time="2025-02-13T20:14:37.900870789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:14:37.901026 containerd[1857]: time="2025-02-13T20:14:37.901007229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:14:37.901486 containerd[1857]: time="2025-02-13T20:14:37.901438631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:37.901633 containerd[1857]: time="2025-02-13T20:14:37.901601152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:37.933762 containerd[1857]: time="2025-02-13T20:14:37.933715034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w9plx,Uid:1572950e-c233-4a41-be6d-4562c1a7f2d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"2068a080e6d23ca2ec89c4b9e165fe064c5080ec10ca3e1acc3f0b318e74e704\"" Feb 13 20:14:39.324417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount806729590.mount: Deactivated successfully. 
Feb 13 20:14:39.752854 kubelet[3630]: E0213 20:14:39.751740 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lvfcz" podUID="7a0204f2-4591-46c1-b244-ab8ca59a1ddb" Feb 13 20:14:39.754775 containerd[1857]: time="2025-02-13T20:14:39.754735464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:39.757841 containerd[1857]: time="2025-02-13T20:14:39.757774116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 20:14:39.765569 containerd[1857]: time="2025-02-13T20:14:39.765523025Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:39.772468 containerd[1857]: time="2025-02-13T20:14:39.772237571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:39.775856 containerd[1857]: time="2025-02-13T20:14:39.774622140Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.916442474s" Feb 13 20:14:39.775856 containerd[1857]: time="2025-02-13T20:14:39.774685980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference 
\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 20:14:39.779677 containerd[1857]: time="2025-02-13T20:14:39.779425078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:14:39.794225 containerd[1857]: time="2025-02-13T20:14:39.794086574Z" level=info msg="CreateContainer within sandbox \"6bfb7a5f2f50cdd7aad9f59d305170d5d1bae5e8908d6f3350fc21e25b9d3aff\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:14:39.851117 containerd[1857]: time="2025-02-13T20:14:39.851059312Z" level=info msg="CreateContainer within sandbox \"6bfb7a5f2f50cdd7aad9f59d305170d5d1bae5e8908d6f3350fc21e25b9d3aff\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f9c608c37f0933a81c5f8a82eb5f50de5677ac0121c4896232ccc6d4ccf66c01\"" Feb 13 20:14:39.852382 containerd[1857]: time="2025-02-13T20:14:39.852330756Z" level=info msg="StartContainer for \"f9c608c37f0933a81c5f8a82eb5f50de5677ac0121c4896232ccc6d4ccf66c01\"" Feb 13 20:14:39.923971 containerd[1857]: time="2025-02-13T20:14:39.923918110Z" level=info msg="StartContainer for \"f9c608c37f0933a81c5f8a82eb5f50de5677ac0121c4896232ccc6d4ccf66c01\" returns successfully" Feb 13 20:14:40.855349 kubelet[3630]: I0213 20:14:40.855251 3630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-798dcdf876-l4h8t" podStartSLOduration=1.934158532 podStartE2EDuration="3.855231224s" podCreationTimestamp="2025-02-13 20:14:37 +0000 UTC" firstStartedPulling="2025-02-13 20:14:37.857918585 +0000 UTC m=+23.206925090" lastFinishedPulling="2025-02-13 20:14:39.778991277 +0000 UTC m=+25.127997782" observedRunningTime="2025-02-13 20:14:40.853430297 +0000 UTC m=+26.202436802" watchObservedRunningTime="2025-02-13 20:14:40.855231224 +0000 UTC m=+26.204237729" Feb 13 20:14:40.902930 kubelet[3630]: E0213 20:14:40.902894 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Feb 13 20:14:40.902930 kubelet[3630]: W0213 20:14:40.902920 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:40.903080 kubelet[3630]: E0213 20:14:40.902940 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:40.903141 kubelet[3630]: E0213 20:14:40.903121 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:40.903141 kubelet[3630]: W0213 20:14:40.903137 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:40.903199 kubelet[3630]: E0213 20:14:40.903147 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:40.903374 kubelet[3630]: E0213 20:14:40.903351 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:40.903374 kubelet[3630]: W0213 20:14:40.903370 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:40.903433 kubelet[3630]: E0213 20:14:40.903379 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:40.903568 kubelet[3630]: E0213 20:14:40.903547 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:40.903568 kubelet[3630]: W0213 20:14:40.903561 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:40.903630 kubelet[3630]: E0213 20:14:40.903570 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:40.903783 kubelet[3630]: E0213 20:14:40.903765 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:40.903783 kubelet[3630]: W0213 20:14:40.903779 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:40.903840 kubelet[3630]: E0213 20:14:40.903789 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:40.903960 kubelet[3630]: E0213 20:14:40.903943 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:40.903960 kubelet[3630]: W0213 20:14:40.903957 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:40.904015 kubelet[3630]: E0213 20:14:40.903968 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:40.904124 kubelet[3630]: E0213 20:14:40.904109 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:40.904146 kubelet[3630]: W0213 20:14:40.904122 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:40.904146 kubelet[3630]: E0213 20:14:40.904131 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:40.904319 kubelet[3630]: E0213 20:14:40.904302 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:40.904319 kubelet[3630]: W0213 20:14:40.904315 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:40.904387 kubelet[3630]: E0213 20:14:40.904327 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:40.904500 kubelet[3630]: E0213 20:14:40.904482 3630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:40.904500 kubelet[3630]: W0213 20:14:40.904496 3630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:40.904551 kubelet[3630]: E0213 20:14:40.904505 3630 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:41.139082 containerd[1857]: time="2025-02-13T20:14:41.138294264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:41.143374 containerd[1857]: time="2025-02-13T20:14:41.142409400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 13 20:14:41.147266 containerd[1857]: time="2025-02-13T20:14:41.146748497Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:41.159098 containerd[1857]: time="2025-02-13T20:14:41.159059024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:41.159955 containerd[1857]: time="2025-02-13T20:14:41.159915987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.380451429s" Feb 13 20:14:41.160041 containerd[1857]: time="2025-02-13T20:14:41.160028027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 20:14:41.164227 containerd[1857]: time="2025-02-13T20:14:41.164164283Z" level=info msg="CreateContainer within sandbox \"2068a080e6d23ca2ec89c4b9e165fe064c5080ec10ca3e1acc3f0b318e74e704\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:14:41.208274 containerd[1857]: time="2025-02-13T20:14:41.208231771Z" level=info msg="CreateContainer within sandbox \"2068a080e6d23ca2ec89c4b9e165fe064c5080ec10ca3e1acc3f0b318e74e704\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0b6a55000f9bde33d0e773fdffcf5e243136bc2945d4b734a80fbb06f4b311cd\"" Feb 13 20:14:41.210147 containerd[1857]: time="2025-02-13T20:14:41.209984218Z" level=info msg="StartContainer for \"0b6a55000f9bde33d0e773fdffcf5e243136bc2945d4b734a80fbb06f4b311cd\"" Feb 13 20:14:41.263085 containerd[1857]: time="2025-02-13T20:14:41.263007940Z" level=info msg="StartContainer for \"0b6a55000f9bde33d0e773fdffcf5e243136bc2945d4b734a80fbb06f4b311cd\" returns successfully" Feb 13 20:14:42.153490 kubelet[3630]: E0213 20:14:41.751190 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lvfcz" podUID="7a0204f2-4591-46c1-b244-ab8ca59a1ddb" Feb 13 20:14:42.153490 kubelet[3630]: I0213 20:14:41.844627 3630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:14:41.785099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b6a55000f9bde33d0e773fdffcf5e243136bc2945d4b734a80fbb06f4b311cd-rootfs.mount: Deactivated successfully. 
Feb 13 20:14:42.175579 containerd[1857]: time="2025-02-13T20:14:42.175506103Z" level=info msg="shim disconnected" id=0b6a55000f9bde33d0e773fdffcf5e243136bc2945d4b734a80fbb06f4b311cd namespace=k8s.io Feb 13 20:14:42.175579 containerd[1857]: time="2025-02-13T20:14:42.175560063Z" level=warning msg="cleaning up after shim disconnected" id=0b6a55000f9bde33d0e773fdffcf5e243136bc2945d4b734a80fbb06f4b311cd namespace=k8s.io Feb 13 20:14:42.175579 containerd[1857]: time="2025-02-13T20:14:42.175571423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:14:42.854859 containerd[1857]: time="2025-02-13T20:14:42.853939252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:14:43.751406 kubelet[3630]: E0213 20:14:43.751355 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lvfcz" podUID="7a0204f2-4591-46c1-b244-ab8ca59a1ddb" Feb 13 20:14:45.751010 kubelet[3630]: E0213 20:14:45.750951 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lvfcz" podUID="7a0204f2-4591-46c1-b244-ab8ca59a1ddb" Feb 13 20:14:45.926401 containerd[1857]: time="2025-02-13T20:14:45.923912811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:45.929162 containerd[1857]: time="2025-02-13T20:14:45.929045546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 20:14:45.934824 containerd[1857]: time="2025-02-13T20:14:45.934178082Z" level=info msg="ImageCreate event 
name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:45.946754 containerd[1857]: time="2025-02-13T20:14:45.946633320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:45.948468 containerd[1857]: time="2025-02-13T20:14:45.948367245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.094386273s" Feb 13 20:14:45.948468 containerd[1857]: time="2025-02-13T20:14:45.948410885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 20:14:45.953596 containerd[1857]: time="2025-02-13T20:14:45.953520701Z" level=info msg="CreateContainer within sandbox \"2068a080e6d23ca2ec89c4b9e165fe064c5080ec10ca3e1acc3f0b318e74e704\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:14:46.009125 containerd[1857]: time="2025-02-13T20:14:46.008799429Z" level=info msg="CreateContainer within sandbox \"2068a080e6d23ca2ec89c4b9e165fe064c5080ec10ca3e1acc3f0b318e74e704\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cb7036534cc82971bcf99e5d3c6a0d75b6c22f7552262cc995a4640e787aba54\"" Feb 13 20:14:46.011235 containerd[1857]: time="2025-02-13T20:14:46.009818712Z" level=info msg="StartContainer for \"cb7036534cc82971bcf99e5d3c6a0d75b6c22f7552262cc995a4640e787aba54\"" Feb 13 20:14:46.108234 containerd[1857]: time="2025-02-13T20:14:46.107970290Z" level=info msg="StartContainer for 
\"cb7036534cc82971bcf99e5d3c6a0d75b6c22f7552262cc995a4640e787aba54\" returns successfully" Feb 13 20:14:47.151958 containerd[1857]: time="2025-02-13T20:14:47.151749062Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:14:47.173050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb7036534cc82971bcf99e5d3c6a0d75b6c22f7552262cc995a4640e787aba54-rootfs.mount: Deactivated successfully. Feb 13 20:14:47.221181 kubelet[3630]: I0213 20:14:47.220981 3630 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 20:14:47.251072 kubelet[3630]: I0213 20:14:47.251024 3630 topology_manager.go:215] "Topology Admit Handler" podUID="523b7af8-9ff0-42ef-9588-a9c433e599e8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:47.257665 kubelet[3630]: I0213 20:14:47.256569 3630 topology_manager.go:215] "Topology Admit Handler" podUID="ce92c99c-5055-43b1-b9d9-2f6d29334b94" podNamespace="kube-system" podName="coredns-7db6d8ff4d-q8k52" Feb 13 20:14:47.265948 kubelet[3630]: I0213 20:14:47.265886 3630 topology_manager.go:215] "Topology Admit Handler" podUID="7056d74b-e474-45a1-8d67-963b49a257e9" podNamespace="calico-apiserver" podName="calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:47.266073 kubelet[3630]: I0213 20:14:47.266046 3630 topology_manager.go:215] "Topology Admit Handler" podUID="0ebb571e-ba96-4825-a97f-086e3868f081" podNamespace="calico-apiserver" podName="calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:47.266421 kubelet[3630]: I0213 20:14:47.266139 3630 topology_manager.go:215] "Topology Admit Handler" podUID="64b944df-0855-47f3-9fea-a03847895d35" podNamespace="calico-system" podName="calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:47.348700 kubelet[3630]: I0213 20:14:47.348655 3630 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/523b7af8-9ff0-42ef-9588-a9c433e599e8-config-volume\") pod \"coredns-7db6d8ff4d-7fsxn\" (UID: \"523b7af8-9ff0-42ef-9588-a9c433e599e8\") " pod="kube-system/coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:47.349024 kubelet[3630]: I0213 20:14:47.348973 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjwxj\" (UniqueName: \"kubernetes.io/projected/523b7af8-9ff0-42ef-9588-a9c433e599e8-kube-api-access-fjwxj\") pod \"coredns-7db6d8ff4d-7fsxn\" (UID: \"523b7af8-9ff0-42ef-9588-a9c433e599e8\") " pod="kube-system/coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:47.450384 kubelet[3630]: I0213 20:14:47.449252 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ml8p\" (UniqueName: \"kubernetes.io/projected/ce92c99c-5055-43b1-b9d9-2f6d29334b94-kube-api-access-7ml8p\") pod \"coredns-7db6d8ff4d-q8k52\" (UID: \"ce92c99c-5055-43b1-b9d9-2f6d29334b94\") " pod="kube-system/coredns-7db6d8ff4d-q8k52" Feb 13 20:14:47.450384 kubelet[3630]: I0213 20:14:47.449299 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64b944df-0855-47f3-9fea-a03847895d35-tigera-ca-bundle\") pod \"calico-kube-controllers-6c647b4f5b-k5qm9\" (UID: \"64b944df-0855-47f3-9fea-a03847895d35\") " pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:47.450384 kubelet[3630]: I0213 20:14:47.449351 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhcf7\" (UniqueName: \"kubernetes.io/projected/0ebb571e-ba96-4825-a97f-086e3868f081-kube-api-access-lhcf7\") pod \"calico-apiserver-6dff5cb7c5-snvwz\" (UID: \"0ebb571e-ba96-4825-a97f-086e3868f081\") " 
pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:47.450384 kubelet[3630]: I0213 20:14:47.449402 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce92c99c-5055-43b1-b9d9-2f6d29334b94-config-volume\") pod \"coredns-7db6d8ff4d-q8k52\" (UID: \"ce92c99c-5055-43b1-b9d9-2f6d29334b94\") " pod="kube-system/coredns-7db6d8ff4d-q8k52" Feb 13 20:14:47.450384 kubelet[3630]: I0213 20:14:47.449437 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7056d74b-e474-45a1-8d67-963b49a257e9-calico-apiserver-certs\") pod \"calico-apiserver-6dff5cb7c5-sdjn4\" (UID: \"7056d74b-e474-45a1-8d67-963b49a257e9\") " pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:47.450610 kubelet[3630]: I0213 20:14:47.449462 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85n8m\" (UniqueName: \"kubernetes.io/projected/64b944df-0855-47f3-9fea-a03847895d35-kube-api-access-85n8m\") pod \"calico-kube-controllers-6c647b4f5b-k5qm9\" (UID: \"64b944df-0855-47f3-9fea-a03847895d35\") " pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:47.450610 kubelet[3630]: I0213 20:14:47.449490 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncs45\" (UniqueName: \"kubernetes.io/projected/7056d74b-e474-45a1-8d67-963b49a257e9-kube-api-access-ncs45\") pod \"calico-apiserver-6dff5cb7c5-sdjn4\" (UID: \"7056d74b-e474-45a1-8d67-963b49a257e9\") " pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:47.450610 kubelet[3630]: I0213 20:14:47.449508 3630 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/0ebb571e-ba96-4825-a97f-086e3868f081-calico-apiserver-certs\") pod \"calico-apiserver-6dff5cb7c5-snvwz\" (UID: \"0ebb571e-ba96-4825-a97f-086e3868f081\") " pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:48.316937 containerd[1857]: time="2025-02-13T20:14:48.316889843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7fsxn,Uid:523b7af8-9ff0-42ef-9588-a9c433e599e8,Namespace:kube-system,Attempt:0,}" Feb 13 20:14:48.319678 containerd[1857]: time="2025-02-13T20:14:48.316891963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvfcz,Uid:7a0204f2-4591-46c1-b244-ab8ca59a1ddb,Namespace:calico-system,Attempt:0,}" Feb 13 20:14:48.356389 containerd[1857]: time="2025-02-13T20:14:48.356326562Z" level=info msg="shim disconnected" id=cb7036534cc82971bcf99e5d3c6a0d75b6c22f7552262cc995a4640e787aba54 namespace=k8s.io Feb 13 20:14:48.356389 containerd[1857]: time="2025-02-13T20:14:48.356385563Z" level=warning msg="cleaning up after shim disconnected" id=cb7036534cc82971bcf99e5d3c6a0d75b6c22f7552262cc995a4640e787aba54 namespace=k8s.io Feb 13 20:14:48.356389 containerd[1857]: time="2025-02-13T20:14:48.356395483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:14:48.471358 containerd[1857]: time="2025-02-13T20:14:48.471310792Z" level=error msg="Failed to destroy network for sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.472540 containerd[1857]: time="2025-02-13T20:14:48.472488875Z" level=error msg="encountered an error cleaning up failed sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.472618 containerd[1857]: time="2025-02-13T20:14:48.472561516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7fsxn,Uid:523b7af8-9ff0-42ef-9588-a9c433e599e8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.472871 kubelet[3630]: E0213 20:14:48.472831 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.473596 kubelet[3630]: E0213 20:14:48.473238 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:48.473596 kubelet[3630]: E0213 20:14:48.473268 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:48.473596 kubelet[3630]: E0213 20:14:48.473323 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7fsxn_kube-system(523b7af8-9ff0-42ef-9588-a9c433e599e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7fsxn_kube-system(523b7af8-9ff0-42ef-9588-a9c433e599e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7fsxn" podUID="523b7af8-9ff0-42ef-9588-a9c433e599e8" Feb 13 20:14:48.477386 containerd[1857]: time="2025-02-13T20:14:48.477096209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8k52,Uid:ce92c99c-5055-43b1-b9d9-2f6d29334b94,Namespace:kube-system,Attempt:0,}" Feb 13 20:14:48.477386 containerd[1857]: time="2025-02-13T20:14:48.477213570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c647b4f5b-k5qm9,Uid:64b944df-0855-47f3-9fea-a03847895d35,Namespace:calico-system,Attempt:0,}" Feb 13 20:14:48.479210 containerd[1857]: time="2025-02-13T20:14:48.479162936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-sdjn4,Uid:7056d74b-e474-45a1-8d67-963b49a257e9,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:14:48.484627 containerd[1857]: time="2025-02-13T20:14:48.484551632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-snvwz,Uid:0ebb571e-ba96-4825-a97f-086e3868f081,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:14:48.484984 containerd[1857]: time="2025-02-13T20:14:48.484939393Z" level=error msg="Failed to destroy 
network for sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.485279 containerd[1857]: time="2025-02-13T20:14:48.485247554Z" level=error msg="encountered an error cleaning up failed sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.485347 containerd[1857]: time="2025-02-13T20:14:48.485322194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvfcz,Uid:7a0204f2-4591-46c1-b244-ab8ca59a1ddb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.485857 kubelet[3630]: E0213 20:14:48.485500 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.485857 kubelet[3630]: E0213 20:14:48.485551 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:48.485857 kubelet[3630]: E0213 20:14:48.485579 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:48.485984 kubelet[3630]: E0213 20:14:48.485613 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lvfcz_calico-system(7a0204f2-4591-46c1-b244-ab8ca59a1ddb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lvfcz_calico-system(7a0204f2-4591-46c1-b244-ab8ca59a1ddb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lvfcz" podUID="7a0204f2-4591-46c1-b244-ab8ca59a1ddb" Feb 13 20:14:48.703852 containerd[1857]: time="2025-02-13T20:14:48.703584018Z" level=error msg="Failed to destroy network for sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.705059 containerd[1857]: time="2025-02-13T20:14:48.704884982Z" level=error msg="encountered an 
error cleaning up failed sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.705059 containerd[1857]: time="2025-02-13T20:14:48.704963182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c647b4f5b-k5qm9,Uid:64b944df-0855-47f3-9fea-a03847895d35,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.705215 kubelet[3630]: E0213 20:14:48.705186 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.705693 kubelet[3630]: E0213 20:14:48.705240 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:48.705777 kubelet[3630]: E0213 20:14:48.705694 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:48.705777 kubelet[3630]: E0213 20:14:48.705741 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c647b4f5b-k5qm9_calico-system(64b944df-0855-47f3-9fea-a03847895d35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c647b4f5b-k5qm9_calico-system(64b944df-0855-47f3-9fea-a03847895d35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" podUID="64b944df-0855-47f3-9fea-a03847895d35" Feb 13 20:14:48.723812 containerd[1857]: time="2025-02-13T20:14:48.723770319Z" level=error msg="Failed to destroy network for sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.724373 containerd[1857]: time="2025-02-13T20:14:48.724231441Z" level=error msg="encountered an error cleaning up failed sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.724373 containerd[1857]: time="2025-02-13T20:14:48.724285321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-sdjn4,Uid:7056d74b-e474-45a1-8d67-963b49a257e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.725458 kubelet[3630]: E0213 20:14:48.724671 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.725458 kubelet[3630]: E0213 20:14:48.724730 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:48.725458 kubelet[3630]: E0213 20:14:48.724748 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:48.725581 kubelet[3630]: E0213 20:14:48.724797 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dff5cb7c5-sdjn4_calico-apiserver(7056d74b-e474-45a1-8d67-963b49a257e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dff5cb7c5-sdjn4_calico-apiserver(7056d74b-e474-45a1-8d67-963b49a257e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" podUID="7056d74b-e474-45a1-8d67-963b49a257e9" Feb 13 20:14:48.736264 containerd[1857]: time="2025-02-13T20:14:48.736142717Z" level=error msg="Failed to destroy network for sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.736685 containerd[1857]: time="2025-02-13T20:14:48.736570998Z" level=error msg="encountered an error cleaning up failed sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.736685 containerd[1857]: time="2025-02-13T20:14:48.736631718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8k52,Uid:ce92c99c-5055-43b1-b9d9-2f6d29334b94,Namespace:kube-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.736995 kubelet[3630]: E0213 20:14:48.736957 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.737088 kubelet[3630]: E0213 20:14:48.737013 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8k52" Feb 13 20:14:48.737088 kubelet[3630]: E0213 20:14:48.737033 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8k52" Feb 13 20:14:48.737159 kubelet[3630]: E0213 20:14:48.737082 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q8k52_kube-system(ce92c99c-5055-43b1-b9d9-2f6d29334b94)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q8k52_kube-system(ce92c99c-5055-43b1-b9d9-2f6d29334b94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q8k52" podUID="ce92c99c-5055-43b1-b9d9-2f6d29334b94" Feb 13 20:14:48.743869 containerd[1857]: time="2025-02-13T20:14:48.743813620Z" level=error msg="Failed to destroy network for sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.744149 containerd[1857]: time="2025-02-13T20:14:48.744112741Z" level=error msg="encountered an error cleaning up failed sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.744195 containerd[1857]: time="2025-02-13T20:14:48.744172621Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-snvwz,Uid:0ebb571e-ba96-4825-a97f-086e3868f081,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.744427 kubelet[3630]: E0213 
20:14:48.744387 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:48.744492 kubelet[3630]: E0213 20:14:48.744443 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:48.744492 kubelet[3630]: E0213 20:14:48.744462 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:48.744548 kubelet[3630]: E0213 20:14:48.744506 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dff5cb7c5-snvwz_calico-apiserver(0ebb571e-ba96-4825-a97f-086e3868f081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dff5cb7c5-snvwz_calico-apiserver(0ebb571e-ba96-4825-a97f-086e3868f081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" podUID="0ebb571e-ba96-4825-a97f-086e3868f081" Feb 13 20:14:48.868471 kubelet[3630]: I0213 20:14:48.868081 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252" Feb 13 20:14:48.870033 containerd[1857]: time="2025-02-13T20:14:48.868789200Z" level=info msg="StopPodSandbox for \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\"" Feb 13 20:14:48.870033 containerd[1857]: time="2025-02-13T20:14:48.869017801Z" level=info msg="Ensure that sandbox da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252 in task-service has been cleanup successfully" Feb 13 20:14:48.870033 containerd[1857]: time="2025-02-13T20:14:48.869413442Z" level=info msg="TearDown network for sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" successfully" Feb 13 20:14:48.870033 containerd[1857]: time="2025-02-13T20:14:48.869444522Z" level=info msg="StopPodSandbox for \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" returns successfully" Feb 13 20:14:48.870222 containerd[1857]: time="2025-02-13T20:14:48.870089724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-snvwz,Uid:0ebb571e-ba96-4825-a97f-086e3868f081,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:14:48.871105 containerd[1857]: time="2025-02-13T20:14:48.870554805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:14:48.873219 kubelet[3630]: I0213 20:14:48.873193 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8" Feb 13 20:14:48.877009 containerd[1857]: time="2025-02-13T20:14:48.876669704Z" 
level=info msg="StopPodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\"" Feb 13 20:14:48.877009 containerd[1857]: time="2025-02-13T20:14:48.876850184Z" level=info msg="Ensure that sandbox 9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8 in task-service has been cleanup successfully" Feb 13 20:14:48.877308 containerd[1857]: time="2025-02-13T20:14:48.877288066Z" level=info msg="TearDown network for sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" successfully" Feb 13 20:14:48.877609 containerd[1857]: time="2025-02-13T20:14:48.877576187Z" level=info msg="StopPodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" returns successfully" Feb 13 20:14:48.878396 kubelet[3630]: I0213 20:14:48.877978 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a" Feb 13 20:14:48.879113 containerd[1857]: time="2025-02-13T20:14:48.879087511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-sdjn4,Uid:7056d74b-e474-45a1-8d67-963b49a257e9,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:14:48.879475 containerd[1857]: time="2025-02-13T20:14:48.879184231Z" level=info msg="StopPodSandbox for \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\"" Feb 13 20:14:48.879935 containerd[1857]: time="2025-02-13T20:14:48.879913994Z" level=info msg="Ensure that sandbox 50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a in task-service has been cleanup successfully" Feb 13 20:14:48.880220 containerd[1857]: time="2025-02-13T20:14:48.880202035Z" level=info msg="TearDown network for sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" successfully" Feb 13 20:14:48.880350 containerd[1857]: time="2025-02-13T20:14:48.880265915Z" level=info msg="StopPodSandbox for 
\"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" returns successfully" Feb 13 20:14:48.882298 containerd[1857]: time="2025-02-13T20:14:48.881817839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c647b4f5b-k5qm9,Uid:64b944df-0855-47f3-9fea-a03847895d35,Namespace:calico-system,Attempt:1,}" Feb 13 20:14:48.883473 kubelet[3630]: I0213 20:14:48.883444 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88" Feb 13 20:14:48.884601 containerd[1857]: time="2025-02-13T20:14:48.884570528Z" level=info msg="StopPodSandbox for \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\"" Feb 13 20:14:48.885206 containerd[1857]: time="2025-02-13T20:14:48.885170930Z" level=info msg="Ensure that sandbox 6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88 in task-service has been cleanup successfully" Feb 13 20:14:48.885540 containerd[1857]: time="2025-02-13T20:14:48.885510691Z" level=info msg="TearDown network for sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" successfully" Feb 13 20:14:48.885540 containerd[1857]: time="2025-02-13T20:14:48.885531971Z" level=info msg="StopPodSandbox for \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" returns successfully" Feb 13 20:14:48.886944 containerd[1857]: time="2025-02-13T20:14:48.886883655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8k52,Uid:ce92c99c-5055-43b1-b9d9-2f6d29334b94,Namespace:kube-system,Attempt:1,}" Feb 13 20:14:48.889436 kubelet[3630]: I0213 20:14:48.888017 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181" Feb 13 20:14:48.892166 containerd[1857]: time="2025-02-13T20:14:48.891050787Z" level=info msg="StopPodSandbox for 
\"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\"" Feb 13 20:14:48.892166 containerd[1857]: time="2025-02-13T20:14:48.891226588Z" level=info msg="Ensure that sandbox e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181 in task-service has been cleanup successfully" Feb 13 20:14:48.892166 containerd[1857]: time="2025-02-13T20:14:48.891531309Z" level=info msg="TearDown network for sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" successfully" Feb 13 20:14:48.892166 containerd[1857]: time="2025-02-13T20:14:48.891549309Z" level=info msg="StopPodSandbox for \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" returns successfully" Feb 13 20:14:48.892447 containerd[1857]: time="2025-02-13T20:14:48.892311631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvfcz,Uid:7a0204f2-4591-46c1-b244-ab8ca59a1ddb,Namespace:calico-system,Attempt:1,}" Feb 13 20:14:48.893136 kubelet[3630]: I0213 20:14:48.893111 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df" Feb 13 20:14:48.893870 containerd[1857]: time="2025-02-13T20:14:48.893691876Z" level=info msg="StopPodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\"" Feb 13 20:14:48.894410 containerd[1857]: time="2025-02-13T20:14:48.894298317Z" level=info msg="Ensure that sandbox 24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df in task-service has been cleanup successfully" Feb 13 20:14:48.897410 containerd[1857]: time="2025-02-13T20:14:48.897173766Z" level=info msg="TearDown network for sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" successfully" Feb 13 20:14:48.897410 containerd[1857]: time="2025-02-13T20:14:48.897204566Z" level=info msg="StopPodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" returns successfully" Feb 13 
20:14:48.898840 containerd[1857]: time="2025-02-13T20:14:48.898257289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7fsxn,Uid:523b7af8-9ff0-42ef-9588-a9c433e599e8,Namespace:kube-system,Attempt:1,}" Feb 13 20:14:49.141188 containerd[1857]: time="2025-02-13T20:14:49.141139307Z" level=error msg="Failed to destroy network for sandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.144287 containerd[1857]: time="2025-02-13T20:14:49.144130397Z" level=error msg="encountered an error cleaning up failed sandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.144287 containerd[1857]: time="2025-02-13T20:14:49.144220877Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-snvwz,Uid:0ebb571e-ba96-4825-a97f-086e3868f081,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.144929 kubelet[3630]: E0213 20:14:49.144748 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.145111 kubelet[3630]: E0213 20:14:49.144959 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:49.145111 kubelet[3630]: E0213 20:14:49.144983 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:49.145224 kubelet[3630]: E0213 20:14:49.145125 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dff5cb7c5-snvwz_calico-apiserver(0ebb571e-ba96-4825-a97f-086e3868f081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dff5cb7c5-snvwz_calico-apiserver(0ebb571e-ba96-4825-a97f-086e3868f081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" podUID="0ebb571e-ba96-4825-a97f-086e3868f081" Feb 13 20:14:49.172806 containerd[1857]: 
time="2025-02-13T20:14:49.172754204Z" level=error msg="Failed to destroy network for sandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.174342 containerd[1857]: time="2025-02-13T20:14:49.174282288Z" level=error msg="encountered an error cleaning up failed sandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.174469 containerd[1857]: time="2025-02-13T20:14:49.174359368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-sdjn4,Uid:7056d74b-e474-45a1-8d67-963b49a257e9,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.176369 kubelet[3630]: E0213 20:14:49.176284 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.176512 kubelet[3630]: E0213 20:14:49.176383 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:49.176616 kubelet[3630]: E0213 20:14:49.176510 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:49.176616 kubelet[3630]: E0213 20:14:49.176567 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dff5cb7c5-sdjn4_calico-apiserver(7056d74b-e474-45a1-8d67-963b49a257e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dff5cb7c5-sdjn4_calico-apiserver(7056d74b-e474-45a1-8d67-963b49a257e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" podUID="7056d74b-e474-45a1-8d67-963b49a257e9" Feb 13 20:14:49.208004 containerd[1857]: time="2025-02-13T20:14:49.207952471Z" level=error msg="Failed to destroy network for sandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.208283 containerd[1857]: time="2025-02-13T20:14:49.208077511Z" level=error msg="Failed to destroy network for sandbox \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.208732 containerd[1857]: time="2025-02-13T20:14:49.208628153Z" level=error msg="encountered an error cleaning up failed sandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.208732 containerd[1857]: time="2025-02-13T20:14:49.208700393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvfcz,Uid:7a0204f2-4591-46c1-b244-ab8ca59a1ddb,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.209077 kubelet[3630]: E0213 20:14:49.208915 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.209077 kubelet[3630]: E0213 20:14:49.208977 3630 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:49.209077 kubelet[3630]: E0213 20:14:49.209000 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:49.210692 kubelet[3630]: E0213 20:14:49.209044 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lvfcz_calico-system(7a0204f2-4591-46c1-b244-ab8ca59a1ddb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lvfcz_calico-system(7a0204f2-4591-46c1-b244-ab8ca59a1ddb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lvfcz" podUID="7a0204f2-4591-46c1-b244-ab8ca59a1ddb" Feb 13 20:14:49.211625 containerd[1857]: time="2025-02-13T20:14:49.211234080Z" level=error msg="encountered an error cleaning up failed sandbox \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.211625 containerd[1857]: time="2025-02-13T20:14:49.211411921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8k52,Uid:ce92c99c-5055-43b1-b9d9-2f6d29334b94,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.211757 kubelet[3630]: E0213 20:14:49.211713 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.211795 kubelet[3630]: E0213 20:14:49.211769 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8k52" Feb 13 20:14:49.211916 kubelet[3630]: E0213 20:14:49.211786 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8k52" Feb 13 20:14:49.211916 kubelet[3630]: E0213 20:14:49.211832 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q8k52_kube-system(ce92c99c-5055-43b1-b9d9-2f6d29334b94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q8k52_kube-system(ce92c99c-5055-43b1-b9d9-2f6d29334b94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q8k52" podUID="ce92c99c-5055-43b1-b9d9-2f6d29334b94" Feb 13 20:14:49.219177 containerd[1857]: time="2025-02-13T20:14:49.219128264Z" level=error msg="Failed to destroy network for sandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.219507 containerd[1857]: time="2025-02-13T20:14:49.219477386Z" level=error msg="encountered an error cleaning up failed sandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.219557 containerd[1857]: time="2025-02-13T20:14:49.219538306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7fsxn,Uid:523b7af8-9ff0-42ef-9588-a9c433e599e8,Namespace:kube-system,Attempt:1,} failed, error" error="failed to 
setup network for sandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.219818 kubelet[3630]: E0213 20:14:49.219762 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.219899 kubelet[3630]: E0213 20:14:49.219821 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:49.219899 kubelet[3630]: E0213 20:14:49.219845 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:49.219952 kubelet[3630]: E0213 20:14:49.219896 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7fsxn_kube-system(523b7af8-9ff0-42ef-9588-a9c433e599e8)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"coredns-7db6d8ff4d-7fsxn_kube-system(523b7af8-9ff0-42ef-9588-a9c433e599e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7fsxn" podUID="523b7af8-9ff0-42ef-9588-a9c433e599e8" Feb 13 20:14:49.222765 containerd[1857]: time="2025-02-13T20:14:49.222721355Z" level=error msg="Failed to destroy network for sandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.223063 containerd[1857]: time="2025-02-13T20:14:49.223034076Z" level=error msg="encountered an error cleaning up failed sandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.223110 containerd[1857]: time="2025-02-13T20:14:49.223092957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c647b4f5b-k5qm9,Uid:64b944df-0855-47f3-9fea-a03847895d35,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.223370 kubelet[3630]: E0213 20:14:49.223313 3630 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:49.223528 kubelet[3630]: E0213 20:14:49.223459 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:49.223528 kubelet[3630]: E0213 20:14:49.223486 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:49.223756 kubelet[3630]: E0213 20:14:49.223686 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c647b4f5b-k5qm9_calico-system(64b944df-0855-47f3-9fea-a03847895d35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c647b4f5b-k5qm9_calico-system(64b944df-0855-47f3-9fea-a03847895d35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" podUID="64b944df-0855-47f3-9fea-a03847895d35" Feb 13 20:14:49.321654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df-shm.mount: Deactivated successfully. Feb 13 20:14:49.896031 kubelet[3630]: I0213 20:14:49.895935 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0" Feb 13 20:14:49.898463 containerd[1857]: time="2025-02-13T20:14:49.898000639Z" level=info msg="StopPodSandbox for \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\"" Feb 13 20:14:49.898463 containerd[1857]: time="2025-02-13T20:14:49.898172280Z" level=info msg="Ensure that sandbox c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0 in task-service has been cleanup successfully" Feb 13 20:14:49.898463 containerd[1857]: time="2025-02-13T20:14:49.898353480Z" level=info msg="TearDown network for sandbox \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\" successfully" Feb 13 20:14:49.898463 containerd[1857]: time="2025-02-13T20:14:49.898369440Z" level=info msg="StopPodSandbox for \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\" returns successfully" Feb 13 20:14:49.901692 containerd[1857]: time="2025-02-13T20:14:49.901657693Z" level=info msg="StopPodSandbox for \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\"" Feb 13 20:14:49.901779 containerd[1857]: time="2025-02-13T20:14:49.901753174Z" level=info msg="TearDown network for sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" successfully" Feb 13 20:14:49.901779 containerd[1857]: time="2025-02-13T20:14:49.901762574Z" level=info msg="StopPodSandbox for 
\"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" returns successfully" Feb 13 20:14:49.902091 systemd[1]: run-netns-cni\x2dc28cf021\x2d421b\x2d8e4a\x2d37d7\x2d6711946eb97d.mount: Deactivated successfully. Feb 13 20:14:49.903522 containerd[1857]: time="2025-02-13T20:14:49.903217740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8k52,Uid:ce92c99c-5055-43b1-b9d9-2f6d29334b94,Namespace:kube-system,Attempt:2,}" Feb 13 20:14:49.903609 kubelet[3630]: I0213 20:14:49.903476 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba" Feb 13 20:14:49.904324 containerd[1857]: time="2025-02-13T20:14:49.904289584Z" level=info msg="StopPodSandbox for \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\"" Feb 13 20:14:49.904504 containerd[1857]: time="2025-02-13T20:14:49.904474705Z" level=info msg="Ensure that sandbox c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba in task-service has been cleanup successfully" Feb 13 20:14:49.904927 containerd[1857]: time="2025-02-13T20:14:49.904900266Z" level=info msg="TearDown network for sandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\" successfully" Feb 13 20:14:49.904927 containerd[1857]: time="2025-02-13T20:14:49.904923106Z" level=info msg="StopPodSandbox for \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\" returns successfully" Feb 13 20:14:49.909009 containerd[1857]: time="2025-02-13T20:14:49.907298076Z" level=info msg="StopPodSandbox for \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\"" Feb 13 20:14:49.909009 containerd[1857]: time="2025-02-13T20:14:49.907387996Z" level=info msg="TearDown network for sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" successfully" Feb 13 20:14:49.909009 containerd[1857]: time="2025-02-13T20:14:49.907397116Z" level=info 
msg="StopPodSandbox for \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" returns successfully" Feb 13 20:14:49.909009 containerd[1857]: time="2025-02-13T20:14:49.908151839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-snvwz,Uid:0ebb571e-ba96-4825-a97f-086e3868f081,Namespace:calico-apiserver,Attempt:2,}" Feb 13 20:14:49.909508 containerd[1857]: time="2025-02-13T20:14:49.909361844Z" level=info msg="StopPodSandbox for \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\"" Feb 13 20:14:49.909564 kubelet[3630]: I0213 20:14:49.908577 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83" Feb 13 20:14:49.909894 containerd[1857]: time="2025-02-13T20:14:49.909778605Z" level=info msg="Ensure that sandbox 64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83 in task-service has been cleanup successfully" Feb 13 20:14:49.910007 containerd[1857]: time="2025-02-13T20:14:49.909988486Z" level=info msg="TearDown network for sandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\" successfully" Feb 13 20:14:49.913691 containerd[1857]: time="2025-02-13T20:14:49.910054167Z" level=info msg="StopPodSandbox for \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\" returns successfully" Feb 13 20:14:49.913691 containerd[1857]: time="2025-02-13T20:14:49.910298367Z" level=info msg="StopPodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\"" Feb 13 20:14:49.913691 containerd[1857]: time="2025-02-13T20:14:49.910357568Z" level=info msg="TearDown network for sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" successfully" Feb 13 20:14:49.913691 containerd[1857]: time="2025-02-13T20:14:49.910365888Z" level=info msg="StopPodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" returns 
successfully" Feb 13 20:14:49.913691 containerd[1857]: time="2025-02-13T20:14:49.911435772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-sdjn4,Uid:7056d74b-e474-45a1-8d67-963b49a257e9,Namespace:calico-apiserver,Attempt:2,}" Feb 13 20:14:49.913691 containerd[1857]: time="2025-02-13T20:14:49.912745137Z" level=info msg="StopPodSandbox for \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\"" Feb 13 20:14:49.913691 containerd[1857]: time="2025-02-13T20:14:49.912878618Z" level=info msg="Ensure that sandbox b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8 in task-service has been cleanup successfully" Feb 13 20:14:49.913883 kubelet[3630]: I0213 20:14:49.912286 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8" Feb 13 20:14:49.910872 systemd[1]: run-netns-cni\x2dd29ed675\x2dd386\x2ddd37\x2d9862\x2d72ff89671e0d.mount: Deactivated successfully. 
Feb 13 20:14:49.915233 containerd[1857]: time="2025-02-13T20:14:49.914463144Z" level=info msg="TearDown network for sandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\" successfully" Feb 13 20:14:49.915233 containerd[1857]: time="2025-02-13T20:14:49.914488024Z" level=info msg="StopPodSandbox for \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\" returns successfully" Feb 13 20:14:49.915233 containerd[1857]: time="2025-02-13T20:14:49.915059306Z" level=info msg="StopPodSandbox for \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\"" Feb 13 20:14:49.916183 containerd[1857]: time="2025-02-13T20:14:49.915536868Z" level=info msg="TearDown network for sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" successfully" Feb 13 20:14:49.916183 containerd[1857]: time="2025-02-13T20:14:49.915998110Z" level=info msg="StopPodSandbox for \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" returns successfully" Feb 13 20:14:49.918577 kubelet[3630]: I0213 20:14:49.918312 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31" Feb 13 20:14:49.919135 containerd[1857]: time="2025-02-13T20:14:49.918907441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvfcz,Uid:7a0204f2-4591-46c1-b244-ab8ca59a1ddb,Namespace:calico-system,Attempt:2,}" Feb 13 20:14:49.919720 systemd[1]: run-netns-cni\x2d0a869d53\x2d70d7\x2d7b0e\x2de68c\x2d5cf67db641a1.mount: Deactivated successfully. Feb 13 20:14:49.920280 systemd[1]: run-netns-cni\x2d9f122685\x2d9b69\x2da716\x2dd943\x2d76fbe685ae32.mount: Deactivated successfully. 
Feb 13 20:14:49.924840 containerd[1857]: time="2025-02-13T20:14:49.924698904Z" level=info msg="StopPodSandbox for \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\"" Feb 13 20:14:49.924840 containerd[1857]: time="2025-02-13T20:14:49.925622908Z" level=info msg="Ensure that sandbox 3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31 in task-service has been cleanup successfully" Feb 13 20:14:49.926888 containerd[1857]: time="2025-02-13T20:14:49.926859273Z" level=info msg="TearDown network for sandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\" successfully" Feb 13 20:14:49.926984 containerd[1857]: time="2025-02-13T20:14:49.926970793Z" level=info msg="StopPodSandbox for \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\" returns successfully" Feb 13 20:14:49.927872 containerd[1857]: time="2025-02-13T20:14:49.927837437Z" level=info msg="StopPodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\"" Feb 13 20:14:49.927963 containerd[1857]: time="2025-02-13T20:14:49.927931757Z" level=info msg="TearDown network for sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" successfully" Feb 13 20:14:49.927963 containerd[1857]: time="2025-02-13T20:14:49.927955677Z" level=info msg="StopPodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" returns successfully" Feb 13 20:14:49.928915 containerd[1857]: time="2025-02-13T20:14:49.928775400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7fsxn,Uid:523b7af8-9ff0-42ef-9588-a9c433e599e8,Namespace:kube-system,Attempt:2,}" Feb 13 20:14:49.929178 kubelet[3630]: I0213 20:14:49.929069 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345" Feb 13 20:14:49.930928 containerd[1857]: time="2025-02-13T20:14:49.930842448Z" level=info msg="StopPodSandbox for 
\"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\"" Feb 13 20:14:49.931013 containerd[1857]: time="2025-02-13T20:14:49.930985009Z" level=info msg="Ensure that sandbox efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345 in task-service has been cleanup successfully" Feb 13 20:14:49.931274 containerd[1857]: time="2025-02-13T20:14:49.931244450Z" level=info msg="TearDown network for sandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\" successfully" Feb 13 20:14:49.931274 containerd[1857]: time="2025-02-13T20:14:49.931266930Z" level=info msg="StopPodSandbox for \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\" returns successfully" Feb 13 20:14:49.932458 containerd[1857]: time="2025-02-13T20:14:49.932333334Z" level=info msg="StopPodSandbox for \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\"" Feb 13 20:14:49.932738 containerd[1857]: time="2025-02-13T20:14:49.932498655Z" level=info msg="TearDown network for sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" successfully" Feb 13 20:14:49.932738 containerd[1857]: time="2025-02-13T20:14:49.932509295Z" level=info msg="StopPodSandbox for \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" returns successfully" Feb 13 20:14:49.933723 containerd[1857]: time="2025-02-13T20:14:49.932969897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c647b4f5b-k5qm9,Uid:64b944df-0855-47f3-9fea-a03847895d35,Namespace:calico-system,Attempt:2,}" Feb 13 20:14:50.125079 containerd[1857]: time="2025-02-13T20:14:50.125028854Z" level=error msg="Failed to destroy network for sandbox \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.125535 containerd[1857]: 
time="2025-02-13T20:14:50.125409496Z" level=error msg="encountered an error cleaning up failed sandbox \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.125535 containerd[1857]: time="2025-02-13T20:14:50.125480696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-snvwz,Uid:0ebb571e-ba96-4825-a97f-086e3868f081,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.126153 kubelet[3630]: E0213 20:14:50.125749 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.126153 kubelet[3630]: E0213 20:14:50.125807 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:50.126153 kubelet[3630]: E0213 20:14:50.125829 3630 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:50.126344 kubelet[3630]: E0213 20:14:50.125866 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dff5cb7c5-snvwz_calico-apiserver(0ebb571e-ba96-4825-a97f-086e3868f081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dff5cb7c5-snvwz_calico-apiserver(0ebb571e-ba96-4825-a97f-086e3868f081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" podUID="0ebb571e-ba96-4825-a97f-086e3868f081" Feb 13 20:14:50.197841 containerd[1857]: time="2025-02-13T20:14:50.197579180Z" level=error msg="Failed to destroy network for sandbox \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.199109 containerd[1857]: time="2025-02-13T20:14:50.198997866Z" level=error msg="encountered an error cleaning up failed sandbox \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.199109 containerd[1857]: time="2025-02-13T20:14:50.199063226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8k52,Uid:ce92c99c-5055-43b1-b9d9-2f6d29334b94,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.200049 kubelet[3630]: E0213 20:14:50.199974 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.200310 kubelet[3630]: E0213 20:14:50.200233 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8k52" Feb 13 20:14:50.200310 kubelet[3630]: E0213 20:14:50.200260 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8k52" Feb 13 20:14:50.201483 kubelet[3630]: E0213 20:14:50.200519 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q8k52_kube-system(ce92c99c-5055-43b1-b9d9-2f6d29334b94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q8k52_kube-system(ce92c99c-5055-43b1-b9d9-2f6d29334b94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q8k52" podUID="ce92c99c-5055-43b1-b9d9-2f6d29334b94" Feb 13 20:14:50.234984 containerd[1857]: time="2025-02-13T20:14:50.234941727Z" level=error msg="Failed to destroy network for sandbox \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.235486 containerd[1857]: time="2025-02-13T20:14:50.235453049Z" level=error msg="encountered an error cleaning up failed sandbox \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.235629 containerd[1857]: time="2025-02-13T20:14:50.235608570Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-sdjn4,Uid:7056d74b-e474-45a1-8d67-963b49a257e9,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.236184 kubelet[3630]: E0213 20:14:50.235979 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.236184 kubelet[3630]: E0213 20:14:50.236033 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:50.236184 kubelet[3630]: E0213 20:14:50.236051 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:50.236305 kubelet[3630]: E0213 20:14:50.236097 3630 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dff5cb7c5-sdjn4_calico-apiserver(7056d74b-e474-45a1-8d67-963b49a257e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dff5cb7c5-sdjn4_calico-apiserver(7056d74b-e474-45a1-8d67-963b49a257e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" podUID="7056d74b-e474-45a1-8d67-963b49a257e9" Feb 13 20:14:50.246826 containerd[1857]: time="2025-02-13T20:14:50.246633453Z" level=error msg="Failed to destroy network for sandbox \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.247180 containerd[1857]: time="2025-02-13T20:14:50.247109295Z" level=error msg="encountered an error cleaning up failed sandbox \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.247264 containerd[1857]: time="2025-02-13T20:14:50.247168136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvfcz,Uid:7a0204f2-4591-46c1-b244-ab8ca59a1ddb,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.247648 kubelet[3630]: E0213 20:14:50.247544 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.247648 kubelet[3630]: E0213 20:14:50.247604 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:50.247648 kubelet[3630]: E0213 20:14:50.247625 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:50.247909 kubelet[3630]: E0213 20:14:50.247673 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lvfcz_calico-system(7a0204f2-4591-46c1-b244-ab8ca59a1ddb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lvfcz_calico-system(7a0204f2-4591-46c1-b244-ab8ca59a1ddb)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lvfcz" podUID="7a0204f2-4591-46c1-b244-ab8ca59a1ddb" Feb 13 20:14:50.265743 containerd[1857]: time="2025-02-13T20:14:50.265701849Z" level=error msg="Failed to destroy network for sandbox \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.266355 containerd[1857]: time="2025-02-13T20:14:50.266193811Z" level=error msg="encountered an error cleaning up failed sandbox \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.266355 containerd[1857]: time="2025-02-13T20:14:50.266261691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7fsxn,Uid:523b7af8-9ff0-42ef-9588-a9c433e599e8,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.266734 kubelet[3630]: E0213 20:14:50.266476 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.266734 kubelet[3630]: E0213 20:14:50.266524 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:50.266734 kubelet[3630]: E0213 20:14:50.266550 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:50.266931 kubelet[3630]: E0213 20:14:50.266591 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7fsxn_kube-system(523b7af8-9ff0-42ef-9588-a9c433e599e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7fsxn_kube-system(523b7af8-9ff0-42ef-9588-a9c433e599e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7fsxn" 
podUID="523b7af8-9ff0-42ef-9588-a9c433e599e8" Feb 13 20:14:50.268079 containerd[1857]: time="2025-02-13T20:14:50.268050018Z" level=error msg="Failed to destroy network for sandbox \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.269633 containerd[1857]: time="2025-02-13T20:14:50.269596904Z" level=error msg="encountered an error cleaning up failed sandbox \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.270307 containerd[1857]: time="2025-02-13T20:14:50.269846945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c647b4f5b-k5qm9,Uid:64b944df-0855-47f3-9fea-a03847895d35,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.270408 kubelet[3630]: E0213 20:14:50.270047 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:50.270408 kubelet[3630]: E0213 20:14:50.270097 3630 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:50.270408 kubelet[3630]: E0213 20:14:50.270114 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:50.270495 kubelet[3630]: E0213 20:14:50.270148 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c647b4f5b-k5qm9_calico-system(64b944df-0855-47f3-9fea-a03847895d35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c647b4f5b-k5qm9_calico-system(64b944df-0855-47f3-9fea-a03847895d35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" podUID="64b944df-0855-47f3-9fea-a03847895d35" Feb 13 20:14:50.331632 systemd[1]: run-netns-cni\x2d3c8310ee\x2dd091\x2d397f\x2da806\x2d7303330d4fc0.mount: Deactivated successfully. 
Feb 13 20:14:50.332622 systemd[1]: run-netns-cni\x2d2d78df31\x2d2dbb\x2d2a6b\x2d175f\x2d3988b645486f.mount: Deactivated successfully. Feb 13 20:14:50.932260 kubelet[3630]: I0213 20:14:50.932231 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501" Feb 13 20:14:50.936364 containerd[1857]: time="2025-02-13T20:14:50.933049000Z" level=info msg="StopPodSandbox for \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\"" Feb 13 20:14:50.936364 containerd[1857]: time="2025-02-13T20:14:50.933234160Z" level=info msg="Ensure that sandbox a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501 in task-service has been cleanup successfully" Feb 13 20:14:50.936364 containerd[1857]: time="2025-02-13T20:14:50.933420721Z" level=info msg="TearDown network for sandbox \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\" successfully" Feb 13 20:14:50.936364 containerd[1857]: time="2025-02-13T20:14:50.933445601Z" level=info msg="StopPodSandbox for \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\" returns successfully" Feb 13 20:14:50.940068 containerd[1857]: time="2025-02-13T20:14:50.938061899Z" level=info msg="StopPodSandbox for \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\"" Feb 13 20:14:50.940068 containerd[1857]: time="2025-02-13T20:14:50.938150340Z" level=info msg="TearDown network for sandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\" successfully" Feb 13 20:14:50.940068 containerd[1857]: time="2025-02-13T20:14:50.938160460Z" level=info msg="StopPodSandbox for \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\" returns successfully" Feb 13 20:14:50.939557 systemd[1]: run-netns-cni\x2df6979c87\x2d844f\x2df5d0\x2db623\x2d61a6311194da.mount: Deactivated successfully. 
Feb 13 20:14:50.942668 containerd[1857]: time="2025-02-13T20:14:50.941476593Z" level=info msg="StopPodSandbox for \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\"" Feb 13 20:14:50.942668 containerd[1857]: time="2025-02-13T20:14:50.941592433Z" level=info msg="TearDown network for sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" successfully" Feb 13 20:14:50.942668 containerd[1857]: time="2025-02-13T20:14:50.941611313Z" level=info msg="StopPodSandbox for \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" returns successfully" Feb 13 20:14:50.942668 containerd[1857]: time="2025-02-13T20:14:50.942377516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c647b4f5b-k5qm9,Uid:64b944df-0855-47f3-9fea-a03847895d35,Namespace:calico-system,Attempt:3,}" Feb 13 20:14:50.943198 kubelet[3630]: I0213 20:14:50.943172 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6" Feb 13 20:14:50.944967 containerd[1857]: time="2025-02-13T20:14:50.944924407Z" level=info msg="StopPodSandbox for \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\"" Feb 13 20:14:50.947714 containerd[1857]: time="2025-02-13T20:14:50.946232492Z" level=info msg="Ensure that sandbox a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6 in task-service has been cleanup successfully" Feb 13 20:14:50.948232 containerd[1857]: time="2025-02-13T20:14:50.947999299Z" level=info msg="TearDown network for sandbox \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\" successfully" Feb 13 20:14:50.948232 containerd[1857]: time="2025-02-13T20:14:50.948023539Z" level=info msg="StopPodSandbox for \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\" returns successfully" Feb 13 20:14:50.948313 kubelet[3630]: I0213 20:14:50.948059 3630 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79" Feb 13 20:14:50.948914 containerd[1857]: time="2025-02-13T20:14:50.948893022Z" level=info msg="StopPodSandbox for \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\"" Feb 13 20:14:50.949368 containerd[1857]: time="2025-02-13T20:14:50.949345744Z" level=info msg="Ensure that sandbox ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79 in task-service has been cleanup successfully" Feb 13 20:14:50.949553 containerd[1857]: time="2025-02-13T20:14:50.949087823Z" level=info msg="StopPodSandbox for \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\"" Feb 13 20:14:50.949927 containerd[1857]: time="2025-02-13T20:14:50.949604385Z" level=info msg="TearDown network for sandbox \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\" successfully" Feb 13 20:14:50.949927 containerd[1857]: time="2025-02-13T20:14:50.949619065Z" level=info msg="StopPodSandbox for \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\" returns successfully" Feb 13 20:14:50.949866 systemd[1]: run-netns-cni\x2da5df08cc\x2d612c\x2da6cf\x2d8e0e\x2dbde4f31e6c83.mount: Deactivated successfully. 
Feb 13 20:14:50.952598 containerd[1857]: time="2025-02-13T20:14:50.952172035Z" level=info msg="TearDown network for sandbox \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\" successfully" Feb 13 20:14:50.952598 containerd[1857]: time="2025-02-13T20:14:50.952205315Z" level=info msg="StopPodSandbox for \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\" returns successfully" Feb 13 20:14:50.953742 containerd[1857]: time="2025-02-13T20:14:50.953027198Z" level=info msg="StopPodSandbox for \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\"" Feb 13 20:14:50.953742 containerd[1857]: time="2025-02-13T20:14:50.953129439Z" level=info msg="TearDown network for sandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\" successfully" Feb 13 20:14:50.953742 containerd[1857]: time="2025-02-13T20:14:50.953138039Z" level=info msg="StopPodSandbox for \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\" returns successfully" Feb 13 20:14:50.953742 containerd[1857]: time="2025-02-13T20:14:50.953161399Z" level=info msg="StopPodSandbox for \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\"" Feb 13 20:14:50.953742 containerd[1857]: time="2025-02-13T20:14:50.953252039Z" level=info msg="TearDown network for sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" successfully" Feb 13 20:14:50.953742 containerd[1857]: time="2025-02-13T20:14:50.953268159Z" level=info msg="StopPodSandbox for \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" returns successfully" Feb 13 20:14:50.956159 containerd[1857]: time="2025-02-13T20:14:50.954313964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8k52,Uid:ce92c99c-5055-43b1-b9d9-2f6d29334b94,Namespace:kube-system,Attempt:3,}" Feb 13 20:14:50.956159 containerd[1857]: time="2025-02-13T20:14:50.954608085Z" level=info msg="StopPodSandbox for 
\"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\"" Feb 13 20:14:50.956159 containerd[1857]: time="2025-02-13T20:14:50.954739325Z" level=info msg="TearDown network for sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" successfully" Feb 13 20:14:50.956159 containerd[1857]: time="2025-02-13T20:14:50.954757765Z" level=info msg="StopPodSandbox for \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" returns successfully" Feb 13 20:14:50.958436 containerd[1857]: time="2025-02-13T20:14:50.958397460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-snvwz,Uid:0ebb571e-ba96-4825-a97f-086e3868f081,Namespace:calico-apiserver,Attempt:3,}" Feb 13 20:14:50.958990 kubelet[3630]: I0213 20:14:50.958965 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6" Feb 13 20:14:50.960291 containerd[1857]: time="2025-02-13T20:14:50.960248827Z" level=info msg="StopPodSandbox for \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\"" Feb 13 20:14:50.960848 systemd[1]: run-netns-cni\x2d433a2f98\x2da7d0\x2d8d4b\x2da86b\x2d4b68801b8009.mount: Deactivated successfully. 
Feb 13 20:14:50.962537 containerd[1857]: time="2025-02-13T20:14:50.962497916Z" level=info msg="Ensure that sandbox 4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6 in task-service has been cleanup successfully" Feb 13 20:14:50.964714 containerd[1857]: time="2025-02-13T20:14:50.964326083Z" level=info msg="TearDown network for sandbox \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\" successfully" Feb 13 20:14:50.964714 containerd[1857]: time="2025-02-13T20:14:50.964355923Z" level=info msg="StopPodSandbox for \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\" returns successfully" Feb 13 20:14:50.969168 containerd[1857]: time="2025-02-13T20:14:50.967512976Z" level=info msg="StopPodSandbox for \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\"" Feb 13 20:14:50.969168 containerd[1857]: time="2025-02-13T20:14:50.967627336Z" level=info msg="TearDown network for sandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\" successfully" Feb 13 20:14:50.969358 kubelet[3630]: I0213 20:14:50.966937 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3" Feb 13 20:14:50.969606 systemd[1]: run-netns-cni\x2dd68b99c4\x2d83eb\x2d10e8\x2d91d9\x2de55035351da6.mount: Deactivated successfully. 
Feb 13 20:14:50.970861 containerd[1857]: time="2025-02-13T20:14:50.970589468Z" level=info msg="StopPodSandbox for \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\" returns successfully" Feb 13 20:14:50.971795 containerd[1857]: time="2025-02-13T20:14:50.971365511Z" level=info msg="StopPodSandbox for \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\"" Feb 13 20:14:50.971795 containerd[1857]: time="2025-02-13T20:14:50.971558072Z" level=info msg="Ensure that sandbox 55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3 in task-service has been cleanup successfully" Feb 13 20:14:50.972467 containerd[1857]: time="2025-02-13T20:14:50.972436395Z" level=info msg="TearDown network for sandbox \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\" successfully" Feb 13 20:14:50.973194 containerd[1857]: time="2025-02-13T20:14:50.973136678Z" level=info msg="StopPodSandbox for \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\" returns successfully" Feb 13 20:14:50.973731 containerd[1857]: time="2025-02-13T20:14:50.973531639Z" level=info msg="StopPodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\"" Feb 13 20:14:50.973731 containerd[1857]: time="2025-02-13T20:14:50.973624440Z" level=info msg="TearDown network for sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" successfully" Feb 13 20:14:50.973731 containerd[1857]: time="2025-02-13T20:14:50.973633560Z" level=info msg="StopPodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" returns successfully" Feb 13 20:14:50.974778 containerd[1857]: time="2025-02-13T20:14:50.974750364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-sdjn4,Uid:7056d74b-e474-45a1-8d67-963b49a257e9,Namespace:calico-apiserver,Attempt:3,}" Feb 13 20:14:50.976220 containerd[1857]: time="2025-02-13T20:14:50.975191846Z" level=info msg="StopPodSandbox for 
\"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\"" Feb 13 20:14:50.976699 containerd[1857]: time="2025-02-13T20:14:50.976595011Z" level=info msg="TearDown network for sandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\" successfully" Feb 13 20:14:50.976776 containerd[1857]: time="2025-02-13T20:14:50.976693452Z" level=info msg="StopPodSandbox for \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\" returns successfully" Feb 13 20:14:50.977301 kubelet[3630]: I0213 20:14:50.976872 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67" Feb 13 20:14:50.978065 containerd[1857]: time="2025-02-13T20:14:50.977904657Z" level=info msg="StopPodSandbox for \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\"" Feb 13 20:14:50.978132 containerd[1857]: time="2025-02-13T20:14:50.978094577Z" level=info msg="StopPodSandbox for \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\"" Feb 13 20:14:50.978603 containerd[1857]: time="2025-02-13T20:14:50.978192978Z" level=info msg="TearDown network for sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" successfully" Feb 13 20:14:50.978603 containerd[1857]: time="2025-02-13T20:14:50.978219858Z" level=info msg="StopPodSandbox for \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" returns successfully" Feb 13 20:14:50.978603 containerd[1857]: time="2025-02-13T20:14:50.978435419Z" level=info msg="Ensure that sandbox 7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67 in task-service has been cleanup successfully" Feb 13 20:14:50.978927 containerd[1857]: time="2025-02-13T20:14:50.978862340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvfcz,Uid:7a0204f2-4591-46c1-b244-ab8ca59a1ddb,Namespace:calico-system,Attempt:3,}" Feb 13 20:14:50.979744 containerd[1857]: 
time="2025-02-13T20:14:50.979280222Z" level=info msg="TearDown network for sandbox \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\" successfully" Feb 13 20:14:50.979744 containerd[1857]: time="2025-02-13T20:14:50.979334862Z" level=info msg="StopPodSandbox for \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\" returns successfully" Feb 13 20:14:50.979744 containerd[1857]: time="2025-02-13T20:14:50.979698904Z" level=info msg="StopPodSandbox for \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\"" Feb 13 20:14:50.979889 containerd[1857]: time="2025-02-13T20:14:50.979856064Z" level=info msg="TearDown network for sandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\" successfully" Feb 13 20:14:50.979889 containerd[1857]: time="2025-02-13T20:14:50.979875424Z" level=info msg="StopPodSandbox for \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\" returns successfully" Feb 13 20:14:50.980880 containerd[1857]: time="2025-02-13T20:14:50.980265626Z" level=info msg="StopPodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\"" Feb 13 20:14:50.980880 containerd[1857]: time="2025-02-13T20:14:50.980404706Z" level=info msg="TearDown network for sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" successfully" Feb 13 20:14:50.980880 containerd[1857]: time="2025-02-13T20:14:50.980425506Z" level=info msg="StopPodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" returns successfully" Feb 13 20:14:50.981068 containerd[1857]: time="2025-02-13T20:14:50.981015909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7fsxn,Uid:523b7af8-9ff0-42ef-9588-a9c433e599e8,Namespace:kube-system,Attempt:3,}" Feb 13 20:14:51.321063 systemd[1]: run-netns-cni\x2d0089b524\x2d439a\x2d7fa4\x2d84ca\x2d32b8b5786a8f.mount: Deactivated successfully. 
Feb 13 20:14:51.321192 systemd[1]: run-netns-cni\x2d66c250c0\x2d8eaa\x2daef8\x2d3677\x2db8eef09bc404.mount: Deactivated successfully. Feb 13 20:14:51.612416 containerd[1857]: time="2025-02-13T20:14:51.612311918Z" level=error msg="Failed to destroy network for sandbox \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.613661 containerd[1857]: time="2025-02-13T20:14:51.612964960Z" level=error msg="encountered an error cleaning up failed sandbox \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.615073 containerd[1857]: time="2025-02-13T20:14:51.615038408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c647b4f5b-k5qm9,Uid:64b944df-0855-47f3-9fea-a03847895d35,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.616571 kubelet[3630]: E0213 20:14:51.616529 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
20:14:51.616707 kubelet[3630]: E0213 20:14:51.616588 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:51.616707 kubelet[3630]: E0213 20:14:51.616608 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:51.616707 kubelet[3630]: E0213 20:14:51.616665 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c647b4f5b-k5qm9_calico-system(64b944df-0855-47f3-9fea-a03847895d35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c647b4f5b-k5qm9_calico-system(64b944df-0855-47f3-9fea-a03847895d35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" podUID="64b944df-0855-47f3-9fea-a03847895d35" Feb 13 20:14:51.673239 containerd[1857]: time="2025-02-13T20:14:51.673106637Z" level=error msg="Failed to destroy network for sandbox 
\"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.675097 containerd[1857]: time="2025-02-13T20:14:51.674625683Z" level=error msg="encountered an error cleaning up failed sandbox \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.675200 containerd[1857]: time="2025-02-13T20:14:51.675054965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-sdjn4,Uid:7056d74b-e474-45a1-8d67-963b49a257e9,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.675707 kubelet[3630]: E0213 20:14:51.675589 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.675707 kubelet[3630]: E0213 20:14:51.675666 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:51.675707 kubelet[3630]: E0213 20:14:51.675693 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:51.675845 kubelet[3630]: E0213 20:14:51.675734 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dff5cb7c5-sdjn4_calico-apiserver(7056d74b-e474-45a1-8d67-963b49a257e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dff5cb7c5-sdjn4_calico-apiserver(7056d74b-e474-45a1-8d67-963b49a257e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" podUID="7056d74b-e474-45a1-8d67-963b49a257e9" Feb 13 20:14:51.680427 containerd[1857]: time="2025-02-13T20:14:51.679819944Z" level=error msg="Failed to destroy network for sandbox \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.681455 
containerd[1857]: time="2025-02-13T20:14:51.681307830Z" level=error msg="encountered an error cleaning up failed sandbox \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.681455 containerd[1857]: time="2025-02-13T20:14:51.681372190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8k52,Uid:ce92c99c-5055-43b1-b9d9-2f6d29334b94,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.682629 kubelet[3630]: E0213 20:14:51.681676 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.682629 kubelet[3630]: E0213 20:14:51.681724 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8k52" Feb 13 20:14:51.682629 kubelet[3630]: E0213 20:14:51.681747 3630 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8k52" Feb 13 20:14:51.682749 kubelet[3630]: E0213 20:14:51.681791 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q8k52_kube-system(ce92c99c-5055-43b1-b9d9-2f6d29334b94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q8k52_kube-system(ce92c99c-5055-43b1-b9d9-2f6d29334b94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q8k52" podUID="ce92c99c-5055-43b1-b9d9-2f6d29334b94" Feb 13 20:14:51.697827 containerd[1857]: time="2025-02-13T20:14:51.697780255Z" level=error msg="Failed to destroy network for sandbox \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.700553 containerd[1857]: time="2025-02-13T20:14:51.699885903Z" level=error msg="encountered an error cleaning up failed sandbox \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.701054 containerd[1857]: time="2025-02-13T20:14:51.700444225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-snvwz,Uid:0ebb571e-ba96-4825-a97f-086e3868f081,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.702148 kubelet[3630]: E0213 20:14:51.701741 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.702148 kubelet[3630]: E0213 20:14:51.701801 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:51.702148 kubelet[3630]: E0213 20:14:51.701834 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:51.702317 kubelet[3630]: E0213 20:14:51.701875 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dff5cb7c5-snvwz_calico-apiserver(0ebb571e-ba96-4825-a97f-086e3868f081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dff5cb7c5-snvwz_calico-apiserver(0ebb571e-ba96-4825-a97f-086e3868f081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" podUID="0ebb571e-ba96-4825-a97f-086e3868f081" Feb 13 20:14:51.707679 containerd[1857]: time="2025-02-13T20:14:51.707600853Z" level=error msg="Failed to destroy network for sandbox \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.708697 containerd[1857]: time="2025-02-13T20:14:51.708670018Z" level=error msg="encountered an error cleaning up failed sandbox \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.708936 containerd[1857]: time="2025-02-13T20:14:51.708914939Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7fsxn,Uid:523b7af8-9ff0-42ef-9588-a9c433e599e8,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.709653 containerd[1857]: time="2025-02-13T20:14:51.709609461Z" level=error msg="Failed to destroy network for sandbox \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.709728 kubelet[3630]: E0213 20:14:51.709606 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.709873 kubelet[3630]: E0213 20:14:51.709828 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:51.709873 kubelet[3630]: E0213 20:14:51.709852 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:51.710812 kubelet[3630]: E0213 20:14:51.710204 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7fsxn_kube-system(523b7af8-9ff0-42ef-9588-a9c433e599e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7fsxn_kube-system(523b7af8-9ff0-42ef-9588-a9c433e599e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7fsxn" podUID="523b7af8-9ff0-42ef-9588-a9c433e599e8" Feb 13 20:14:51.711430 containerd[1857]: time="2025-02-13T20:14:51.711304228Z" level=error msg="encountered an error cleaning up failed sandbox \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.711430 containerd[1857]: time="2025-02-13T20:14:51.711366988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvfcz,Uid:7a0204f2-4591-46c1-b244-ab8ca59a1ddb,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.711832 kubelet[3630]: E0213 20:14:51.711732 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:51.711832 kubelet[3630]: E0213 20:14:51.711769 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:51.711832 kubelet[3630]: E0213 20:14:51.711816 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:51.712094 kubelet[3630]: E0213 20:14:51.711857 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lvfcz_calico-system(7a0204f2-4591-46c1-b244-ab8ca59a1ddb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lvfcz_calico-system(7a0204f2-4591-46c1-b244-ab8ca59a1ddb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lvfcz" podUID="7a0204f2-4591-46c1-b244-ab8ca59a1ddb" Feb 13 20:14:51.983870 kubelet[3630]: I0213 20:14:51.983035 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9" Feb 13 20:14:51.986267 containerd[1857]: time="2025-02-13T20:14:51.986182432Z" level=info msg="StopPodSandbox for \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\"" Feb 13 20:14:51.987055 containerd[1857]: time="2025-02-13T20:14:51.986369232Z" level=info msg="Ensure that sandbox 2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9 in task-service has been cleanup successfully" Feb 13 20:14:51.987055 containerd[1857]: time="2025-02-13T20:14:51.987043235Z" level=info msg="TearDown network for sandbox \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\" successfully" Feb 13 20:14:51.987208 containerd[1857]: time="2025-02-13T20:14:51.987063835Z" level=info msg="StopPodSandbox for \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\" returns successfully" Feb 13 20:14:51.988064 containerd[1857]: time="2025-02-13T20:14:51.987462597Z" level=info msg="StopPodSandbox for \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\"" Feb 13 20:14:51.988064 containerd[1857]: time="2025-02-13T20:14:51.987541917Z" level=info msg="TearDown network for sandbox \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\" successfully" Feb 13 20:14:51.988064 containerd[1857]: time="2025-02-13T20:14:51.987752758Z" level=info msg="StopPodSandbox for \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\" returns successfully" Feb 13 
20:14:51.989025 containerd[1857]: time="2025-02-13T20:14:51.988992603Z" level=info msg="StopPodSandbox for \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\"" Feb 13 20:14:51.989091 containerd[1857]: time="2025-02-13T20:14:51.989065123Z" level=info msg="TearDown network for sandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\" successfully" Feb 13 20:14:51.989091 containerd[1857]: time="2025-02-13T20:14:51.989075323Z" level=info msg="StopPodSandbox for \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\" returns successfully" Feb 13 20:14:51.989860 containerd[1857]: time="2025-02-13T20:14:51.989792286Z" level=info msg="StopPodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\"" Feb 13 20:14:51.990371 containerd[1857]: time="2025-02-13T20:14:51.990105447Z" level=info msg="TearDown network for sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" successfully" Feb 13 20:14:51.990371 containerd[1857]: time="2025-02-13T20:14:51.990124127Z" level=info msg="StopPodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" returns successfully" Feb 13 20:14:51.990738 kubelet[3630]: I0213 20:14:51.990713 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a" Feb 13 20:14:51.991218 containerd[1857]: time="2025-02-13T20:14:51.991191971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-sdjn4,Uid:7056d74b-e474-45a1-8d67-963b49a257e9,Namespace:calico-apiserver,Attempt:4,}" Feb 13 20:14:51.993445 containerd[1857]: time="2025-02-13T20:14:51.993410180Z" level=info msg="StopPodSandbox for \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\"" Feb 13 20:14:51.993994 containerd[1857]: time="2025-02-13T20:14:51.993842222Z" level=info msg="Ensure that sandbox 
b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a in task-service has been cleanup successfully" Feb 13 20:14:51.994928 containerd[1857]: time="2025-02-13T20:14:51.994898546Z" level=info msg="TearDown network for sandbox \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\" successfully" Feb 13 20:14:51.994928 containerd[1857]: time="2025-02-13T20:14:51.994923466Z" level=info msg="StopPodSandbox for \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\" returns successfully" Feb 13 20:14:51.995674 containerd[1857]: time="2025-02-13T20:14:51.995498588Z" level=info msg="StopPodSandbox for \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\"" Feb 13 20:14:51.995674 containerd[1857]: time="2025-02-13T20:14:51.995583109Z" level=info msg="TearDown network for sandbox \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\" successfully" Feb 13 20:14:51.995674 containerd[1857]: time="2025-02-13T20:14:51.995592309Z" level=info msg="StopPodSandbox for \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\" returns successfully" Feb 13 20:14:51.996287 containerd[1857]: time="2025-02-13T20:14:51.996251551Z" level=info msg="StopPodSandbox for \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\"" Feb 13 20:14:51.996339 containerd[1857]: time="2025-02-13T20:14:51.996329592Z" level=info msg="TearDown network for sandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\" successfully" Feb 13 20:14:51.996363 containerd[1857]: time="2025-02-13T20:14:51.996339272Z" level=info msg="StopPodSandbox for \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\" returns successfully" Feb 13 20:14:51.996933 containerd[1857]: time="2025-02-13T20:14:51.996903914Z" level=info msg="StopPodSandbox for \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\"" Feb 13 20:14:51.997000 containerd[1857]: time="2025-02-13T20:14:51.996978714Z" level=info 
msg="TearDown network for sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" successfully" Feb 13 20:14:51.997000 containerd[1857]: time="2025-02-13T20:14:51.996994434Z" level=info msg="StopPodSandbox for \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" returns successfully" Feb 13 20:14:51.999049 containerd[1857]: time="2025-02-13T20:14:51.998991402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c647b4f5b-k5qm9,Uid:64b944df-0855-47f3-9fea-a03847895d35,Namespace:calico-system,Attempt:4,}" Feb 13 20:14:51.999626 kubelet[3630]: I0213 20:14:51.999564 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320" Feb 13 20:14:52.003251 containerd[1857]: time="2025-02-13T20:14:52.002818577Z" level=info msg="StopPodSandbox for \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\"" Feb 13 20:14:52.003251 containerd[1857]: time="2025-02-13T20:14:52.003106498Z" level=info msg="Ensure that sandbox a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320 in task-service has been cleanup successfully" Feb 13 20:14:52.004401 containerd[1857]: time="2025-02-13T20:14:52.004362063Z" level=info msg="TearDown network for sandbox \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\" successfully" Feb 13 20:14:52.004518 containerd[1857]: time="2025-02-13T20:14:52.004497504Z" level=info msg="StopPodSandbox for \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\" returns successfully" Feb 13 20:14:52.005705 containerd[1857]: time="2025-02-13T20:14:52.005522068Z" level=info msg="StopPodSandbox for \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\"" Feb 13 20:14:52.005705 containerd[1857]: time="2025-02-13T20:14:52.005607428Z" level=info msg="TearDown network for sandbox \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\" 
successfully" Feb 13 20:14:52.005705 containerd[1857]: time="2025-02-13T20:14:52.005617308Z" level=info msg="StopPodSandbox for \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\" returns successfully" Feb 13 20:14:52.006334 containerd[1857]: time="2025-02-13T20:14:52.006187391Z" level=info msg="StopPodSandbox for \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\"" Feb 13 20:14:52.006418 containerd[1857]: time="2025-02-13T20:14:52.006364311Z" level=info msg="TearDown network for sandbox \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\" successfully" Feb 13 20:14:52.006418 containerd[1857]: time="2025-02-13T20:14:52.006398711Z" level=info msg="StopPodSandbox for \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\" returns successfully" Feb 13 20:14:52.007088 containerd[1857]: time="2025-02-13T20:14:52.007058354Z" level=info msg="StopPodSandbox for \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\"" Feb 13 20:14:52.007157 containerd[1857]: time="2025-02-13T20:14:52.007136434Z" level=info msg="TearDown network for sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" successfully" Feb 13 20:14:52.007180 containerd[1857]: time="2025-02-13T20:14:52.007156034Z" level=info msg="StopPodSandbox for \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" returns successfully" Feb 13 20:14:52.007582 kubelet[3630]: I0213 20:14:52.007463 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243" Feb 13 20:14:52.008097 containerd[1857]: time="2025-02-13T20:14:52.008058278Z" level=info msg="StopPodSandbox for \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\"" Feb 13 20:14:52.008255 containerd[1857]: time="2025-02-13T20:14:52.008210039Z" level=info msg="Ensure that sandbox 9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243 in 
task-service has been cleanup successfully" Feb 13 20:14:52.008756 containerd[1857]: time="2025-02-13T20:14:52.008708561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8k52,Uid:ce92c99c-5055-43b1-b9d9-2f6d29334b94,Namespace:kube-system,Attempt:4,}" Feb 13 20:14:52.011492 containerd[1857]: time="2025-02-13T20:14:52.011439091Z" level=info msg="TearDown network for sandbox \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\" successfully" Feb 13 20:14:52.011492 containerd[1857]: time="2025-02-13T20:14:52.011466611Z" level=info msg="StopPodSandbox for \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\" returns successfully" Feb 13 20:14:52.012415 containerd[1857]: time="2025-02-13T20:14:52.012365015Z" level=info msg="StopPodSandbox for \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\"" Feb 13 20:14:52.012587 containerd[1857]: time="2025-02-13T20:14:52.012447335Z" level=info msg="TearDown network for sandbox \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\" successfully" Feb 13 20:14:52.012587 containerd[1857]: time="2025-02-13T20:14:52.012457255Z" level=info msg="StopPodSandbox for \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\" returns successfully" Feb 13 20:14:52.013124 containerd[1857]: time="2025-02-13T20:14:52.013095178Z" level=info msg="StopPodSandbox for \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\"" Feb 13 20:14:52.013191 containerd[1857]: time="2025-02-13T20:14:52.013174578Z" level=info msg="TearDown network for sandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\" successfully" Feb 13 20:14:52.013191 containerd[1857]: time="2025-02-13T20:14:52.013184098Z" level=info msg="StopPodSandbox for \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\" returns successfully" Feb 13 20:14:52.013833 containerd[1857]: time="2025-02-13T20:14:52.013669180Z" level=info msg="StopPodSandbox 
for \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\"" Feb 13 20:14:52.014066 containerd[1857]: time="2025-02-13T20:14:52.013955861Z" level=info msg="TearDown network for sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" successfully" Feb 13 20:14:52.014066 containerd[1857]: time="2025-02-13T20:14:52.014009981Z" level=info msg="StopPodSandbox for \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" returns successfully" Feb 13 20:14:52.015656 containerd[1857]: time="2025-02-13T20:14:52.015475947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvfcz,Uid:7a0204f2-4591-46c1-b244-ab8ca59a1ddb,Namespace:calico-system,Attempt:4,}" Feb 13 20:14:52.016277 kubelet[3630]: I0213 20:14:52.016175 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1" Feb 13 20:14:52.018206 containerd[1857]: time="2025-02-13T20:14:52.018103398Z" level=info msg="StopPodSandbox for \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\"" Feb 13 20:14:52.018383 containerd[1857]: time="2025-02-13T20:14:52.018273318Z" level=info msg="Ensure that sandbox 9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1 in task-service has been cleanup successfully" Feb 13 20:14:52.019059 containerd[1857]: time="2025-02-13T20:14:52.018844320Z" level=info msg="TearDown network for sandbox \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\" successfully" Feb 13 20:14:52.019059 containerd[1857]: time="2025-02-13T20:14:52.018870681Z" level=info msg="StopPodSandbox for \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\" returns successfully" Feb 13 20:14:52.019900 containerd[1857]: time="2025-02-13T20:14:52.019769604Z" level=info msg="StopPodSandbox for \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\"" Feb 13 20:14:52.020006 containerd[1857]: 
time="2025-02-13T20:14:52.019875165Z" level=info msg="TearDown network for sandbox \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\" successfully" Feb 13 20:14:52.020111 containerd[1857]: time="2025-02-13T20:14:52.020090885Z" level=info msg="StopPodSandbox for \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\" returns successfully" Feb 13 20:14:52.020705 containerd[1857]: time="2025-02-13T20:14:52.020615367Z" level=info msg="StopPodSandbox for \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\"" Feb 13 20:14:52.020866 containerd[1857]: time="2025-02-13T20:14:52.020762648Z" level=info msg="TearDown network for sandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\" successfully" Feb 13 20:14:52.020866 containerd[1857]: time="2025-02-13T20:14:52.020775528Z" level=info msg="StopPodSandbox for \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\" returns successfully" Feb 13 20:14:52.022325 containerd[1857]: time="2025-02-13T20:14:52.022154014Z" level=info msg="StopPodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\"" Feb 13 20:14:52.024609 containerd[1857]: time="2025-02-13T20:14:52.022367614Z" level=info msg="TearDown network for sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" successfully" Feb 13 20:14:52.024609 containerd[1857]: time="2025-02-13T20:14:52.022381774Z" level=info msg="StopPodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" returns successfully" Feb 13 20:14:52.024609 containerd[1857]: time="2025-02-13T20:14:52.022994137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7fsxn,Uid:523b7af8-9ff0-42ef-9588-a9c433e599e8,Namespace:kube-system,Attempt:4,}" Feb 13 20:14:52.025077 kubelet[3630]: I0213 20:14:52.024036 3630 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50" Feb 13 20:14:52.025148 containerd[1857]: time="2025-02-13T20:14:52.024867624Z" level=info msg="StopPodSandbox for \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\"" Feb 13 20:14:52.025893 containerd[1857]: time="2025-02-13T20:14:52.025570787Z" level=info msg="Ensure that sandbox 2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50 in task-service has been cleanup successfully" Feb 13 20:14:52.026781 containerd[1857]: time="2025-02-13T20:14:52.026695911Z" level=info msg="TearDown network for sandbox \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\" successfully" Feb 13 20:14:52.027080 containerd[1857]: time="2025-02-13T20:14:52.026972153Z" level=info msg="StopPodSandbox for \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\" returns successfully" Feb 13 20:14:52.028364 containerd[1857]: time="2025-02-13T20:14:52.028218637Z" level=info msg="StopPodSandbox for \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\"" Feb 13 20:14:52.028364 containerd[1857]: time="2025-02-13T20:14:52.028329958Z" level=info msg="TearDown network for sandbox \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\" successfully" Feb 13 20:14:52.028364 containerd[1857]: time="2025-02-13T20:14:52.028341478Z" level=info msg="StopPodSandbox for \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\" returns successfully" Feb 13 20:14:52.029906 containerd[1857]: time="2025-02-13T20:14:52.029121521Z" level=info msg="StopPodSandbox for \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\"" Feb 13 20:14:52.029906 containerd[1857]: time="2025-02-13T20:14:52.029196761Z" level=info msg="TearDown network for sandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\" successfully" Feb 13 20:14:52.029906 containerd[1857]: time="2025-02-13T20:14:52.029205881Z" level=info msg="StopPodSandbox 
for \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\" returns successfully" Feb 13 20:14:52.029906 containerd[1857]: time="2025-02-13T20:14:52.029615883Z" level=info msg="StopPodSandbox for \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\"" Feb 13 20:14:52.029906 containerd[1857]: time="2025-02-13T20:14:52.029694323Z" level=info msg="TearDown network for sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" successfully" Feb 13 20:14:52.029906 containerd[1857]: time="2025-02-13T20:14:52.029776764Z" level=info msg="StopPodSandbox for \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" returns successfully" Feb 13 20:14:52.030902 containerd[1857]: time="2025-02-13T20:14:52.030812928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-snvwz,Uid:0ebb571e-ba96-4825-a97f-086e3868f081,Namespace:calico-apiserver,Attempt:4,}" Feb 13 20:14:52.295349 containerd[1857]: time="2025-02-13T20:14:52.293672004Z" level=error msg="Failed to destroy network for sandbox \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.295349 containerd[1857]: time="2025-02-13T20:14:52.294234126Z" level=error msg="encountered an error cleaning up failed sandbox \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.295349 containerd[1857]: time="2025-02-13T20:14:52.294460247Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-sdjn4,Uid:7056d74b-e474-45a1-8d67-963b49a257e9,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.295586 kubelet[3630]: E0213 20:14:52.295186 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.295586 kubelet[3630]: E0213 20:14:52.295247 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:52.295586 kubelet[3630]: E0213 20:14:52.295268 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:52.295702 kubelet[3630]: E0213 20:14:52.295311 3630 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dff5cb7c5-sdjn4_calico-apiserver(7056d74b-e474-45a1-8d67-963b49a257e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dff5cb7c5-sdjn4_calico-apiserver(7056d74b-e474-45a1-8d67-963b49a257e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" podUID="7056d74b-e474-45a1-8d67-963b49a257e9" Feb 13 20:14:52.329320 systemd[1]: run-netns-cni\x2d96c1203d\x2ded59\x2d9241\x2d99c6\x2dbf0ac4733e2b.mount: Deactivated successfully. Feb 13 20:14:52.329706 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320-shm.mount: Deactivated successfully. Feb 13 20:14:52.329793 systemd[1]: run-netns-cni\x2d61edb052\x2d2bea\x2db34b\x2da09e\x2ddc260f07adf0.mount: Deactivated successfully. Feb 13 20:14:52.330340 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a-shm.mount: Deactivated successfully. Feb 13 20:14:52.356289 containerd[1857]: time="2025-02-13T20:14:52.356131450Z" level=error msg="Failed to destroy network for sandbox \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.360322 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d-shm.mount: Deactivated successfully. 
Feb 13 20:14:52.364971 containerd[1857]: time="2025-02-13T20:14:52.364851125Z" level=error msg="encountered an error cleaning up failed sandbox \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.364971 containerd[1857]: time="2025-02-13T20:14:52.364932005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c647b4f5b-k5qm9,Uid:64b944df-0855-47f3-9fea-a03847895d35,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.365260 kubelet[3630]: E0213 20:14:52.365163 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.365260 kubelet[3630]: E0213 20:14:52.365221 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:52.365260 
kubelet[3630]: E0213 20:14:52.365242 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:52.365372 kubelet[3630]: E0213 20:14:52.365277 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c647b4f5b-k5qm9_calico-system(64b944df-0855-47f3-9fea-a03847895d35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c647b4f5b-k5qm9_calico-system(64b944df-0855-47f3-9fea-a03847895d35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" podUID="64b944df-0855-47f3-9fea-a03847895d35" Feb 13 20:14:52.394259 containerd[1857]: time="2025-02-13T20:14:52.394110600Z" level=error msg="Failed to destroy network for sandbox \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.395616 containerd[1857]: time="2025-02-13T20:14:52.394933803Z" level=error msg="encountered an error cleaning up failed sandbox \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.398139 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0-shm.mount: Deactivated successfully. Feb 13 20:14:52.400308 containerd[1857]: time="2025-02-13T20:14:52.400122224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7fsxn,Uid:523b7af8-9ff0-42ef-9588-a9c433e599e8,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.400386 kubelet[3630]: E0213 20:14:52.400343 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.400424 kubelet[3630]: E0213 20:14:52.400399 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:52.400424 kubelet[3630]: E0213 20:14:52.400419 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:52.400478 kubelet[3630]: E0213 20:14:52.400455 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7fsxn_kube-system(523b7af8-9ff0-42ef-9588-a9c433e599e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7fsxn_kube-system(523b7af8-9ff0-42ef-9588-a9c433e599e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7fsxn" podUID="523b7af8-9ff0-42ef-9588-a9c433e599e8" Feb 13 20:14:52.414682 containerd[1857]: time="2025-02-13T20:14:52.414413600Z" level=error msg="Failed to destroy network for sandbox \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.418964 containerd[1857]: time="2025-02-13T20:14:52.418837577Z" level=error msg="encountered an error cleaning up failed sandbox \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.419075 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8-shm.mount: Deactivated successfully. Feb 13 20:14:52.419255 containerd[1857]: time="2025-02-13T20:14:52.419124059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8k52,Uid:ce92c99c-5055-43b1-b9d9-2f6d29334b94,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.419958 kubelet[3630]: E0213 20:14:52.419701 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.419958 kubelet[3630]: E0213 20:14:52.419904 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8k52" Feb 13 20:14:52.419958 kubelet[3630]: E0213 20:14:52.419930 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8k52" Feb 13 20:14:52.420563 kubelet[3630]: E0213 20:14:52.420129 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q8k52_kube-system(ce92c99c-5055-43b1-b9d9-2f6d29334b94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q8k52_kube-system(ce92c99c-5055-43b1-b9d9-2f6d29334b94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q8k52" podUID="ce92c99c-5055-43b1-b9d9-2f6d29334b94" Feb 13 20:14:52.424230 containerd[1857]: time="2025-02-13T20:14:52.424128478Z" level=error msg="Failed to destroy network for sandbox \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.425956 containerd[1857]: time="2025-02-13T20:14:52.425910285Z" level=error msg="encountered an error cleaning up failed sandbox \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.426023 containerd[1857]: time="2025-02-13T20:14:52.425978806Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-snvwz,Uid:0ebb571e-ba96-4825-a97f-086e3868f081,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.426554 kubelet[3630]: E0213 20:14:52.426350 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.426554 kubelet[3630]: E0213 20:14:52.426405 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:52.426554 kubelet[3630]: E0213 20:14:52.426423 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:52.426662 kubelet[3630]: E0213 20:14:52.426461 3630 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dff5cb7c5-snvwz_calico-apiserver(0ebb571e-ba96-4825-a97f-086e3868f081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dff5cb7c5-snvwz_calico-apiserver(0ebb571e-ba96-4825-a97f-086e3868f081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" podUID="0ebb571e-ba96-4825-a97f-086e3868f081" Feb 13 20:14:52.428779 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6-shm.mount: Deactivated successfully. Feb 13 20:14:52.434928 containerd[1857]: time="2025-02-13T20:14:52.434871521Z" level=error msg="Failed to destroy network for sandbox \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.435526 containerd[1857]: time="2025-02-13T20:14:52.435489563Z" level=error msg="encountered an error cleaning up failed sandbox \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.435579 containerd[1857]: time="2025-02-13T20:14:52.435556963Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-lvfcz,Uid:7a0204f2-4591-46c1-b244-ab8ca59a1ddb,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.436401 kubelet[3630]: E0213 20:14:52.435979 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:52.436915 kubelet[3630]: E0213 20:14:52.436754 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:52.436915 kubelet[3630]: E0213 20:14:52.436804 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:52.436915 kubelet[3630]: E0213 20:14:52.436846 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-lvfcz_calico-system(7a0204f2-4591-46c1-b244-ab8ca59a1ddb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lvfcz_calico-system(7a0204f2-4591-46c1-b244-ab8ca59a1ddb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lvfcz" podUID="7a0204f2-4591-46c1-b244-ab8ca59a1ddb" Feb 13 20:14:53.030966 kubelet[3630]: I0213 20:14:53.030838 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014" Feb 13 20:14:53.032369 containerd[1857]: time="2025-02-13T20:14:53.032082291Z" level=info msg="StopPodSandbox for \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\"" Feb 13 20:14:53.032369 containerd[1857]: time="2025-02-13T20:14:53.032249011Z" level=info msg="Ensure that sandbox 1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014 in task-service has been cleanup successfully" Feb 13 20:14:53.034930 containerd[1857]: time="2025-02-13T20:14:53.034894127Z" level=info msg="TearDown network for sandbox \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\" successfully" Feb 13 20:14:53.035061 containerd[1857]: time="2025-02-13T20:14:53.035045767Z" level=info msg="StopPodSandbox for \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\" returns successfully" Feb 13 20:14:53.035766 containerd[1857]: time="2025-02-13T20:14:53.035729366Z" level=info msg="StopPodSandbox for \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\"" Feb 13 20:14:53.036269 containerd[1857]: time="2025-02-13T20:14:53.036247886Z" level=info msg="TearDown network 
for sandbox \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\" successfully" Feb 13 20:14:53.036711 containerd[1857]: time="2025-02-13T20:14:53.036688125Z" level=info msg="StopPodSandbox for \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\" returns successfully" Feb 13 20:14:53.037348 kubelet[3630]: I0213 20:14:53.037189 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0" Feb 13 20:14:53.037865 containerd[1857]: time="2025-02-13T20:14:53.037841484Z" level=info msg="StopPodSandbox for \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\"" Feb 13 20:14:53.038088 containerd[1857]: time="2025-02-13T20:14:53.038048004Z" level=info msg="TearDown network for sandbox \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\" successfully" Feb 13 20:14:53.038088 containerd[1857]: time="2025-02-13T20:14:53.038064924Z" level=info msg="StopPodSandbox for \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\" returns successfully" Feb 13 20:14:53.039182 containerd[1857]: time="2025-02-13T20:14:53.038868483Z" level=info msg="StopPodSandbox for \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\"" Feb 13 20:14:53.039182 containerd[1857]: time="2025-02-13T20:14:53.038948562Z" level=info msg="TearDown network for sandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\" successfully" Feb 13 20:14:53.039182 containerd[1857]: time="2025-02-13T20:14:53.038957962Z" level=info msg="StopPodSandbox for \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\" returns successfully" Feb 13 20:14:53.039692 containerd[1857]: time="2025-02-13T20:14:53.039633322Z" level=info msg="StopPodSandbox for \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\"" Feb 13 20:14:53.039910 containerd[1857]: time="2025-02-13T20:14:53.039892481Z" level=info msg="TearDown 
network for sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" successfully" Feb 13 20:14:53.040095 containerd[1857]: time="2025-02-13T20:14:53.040078121Z" level=info msg="StopPodSandbox for \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" returns successfully" Feb 13 20:14:53.040493 containerd[1857]: time="2025-02-13T20:14:53.040019761Z" level=info msg="StopPodSandbox for \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\"" Feb 13 20:14:53.040763 containerd[1857]: time="2025-02-13T20:14:53.040740440Z" level=info msg="Ensure that sandbox 19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0 in task-service has been cleanup successfully" Feb 13 20:14:53.041722 containerd[1857]: time="2025-02-13T20:14:53.041685399Z" level=info msg="TearDown network for sandbox \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\" successfully" Feb 13 20:14:53.042520 containerd[1857]: time="2025-02-13T20:14:53.042311118Z" level=info msg="StopPodSandbox for \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\" returns successfully" Feb 13 20:14:53.043912 containerd[1857]: time="2025-02-13T20:14:53.041886519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvfcz,Uid:7a0204f2-4591-46c1-b244-ab8ca59a1ddb,Namespace:calico-system,Attempt:5,}" Feb 13 20:14:53.044662 containerd[1857]: time="2025-02-13T20:14:53.044620836Z" level=info msg="StopPodSandbox for \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\"" Feb 13 20:14:53.045193 containerd[1857]: time="2025-02-13T20:14:53.045083675Z" level=info msg="TearDown network for sandbox \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\" successfully" Feb 13 20:14:53.045193 containerd[1857]: time="2025-02-13T20:14:53.045112475Z" level=info msg="StopPodSandbox for \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\" returns successfully" Feb 13 20:14:53.046195 
kubelet[3630]: I0213 20:14:53.046001 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849" Feb 13 20:14:53.046538 containerd[1857]: time="2025-02-13T20:14:53.046516473Z" level=info msg="StopPodSandbox for \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\"" Feb 13 20:14:53.046722 containerd[1857]: time="2025-02-13T20:14:53.046707233Z" level=info msg="TearDown network for sandbox \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\" successfully" Feb 13 20:14:53.046931 containerd[1857]: time="2025-02-13T20:14:53.046795953Z" level=info msg="StopPodSandbox for \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\" returns successfully" Feb 13 20:14:53.047393 containerd[1857]: time="2025-02-13T20:14:53.047367032Z" level=info msg="StopPodSandbox for \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\"" Feb 13 20:14:53.048300 containerd[1857]: time="2025-02-13T20:14:53.047559032Z" level=info msg="Ensure that sandbox 9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849 in task-service has been cleanup successfully" Feb 13 20:14:53.048300 containerd[1857]: time="2025-02-13T20:14:53.048056831Z" level=info msg="TearDown network for sandbox \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\" successfully" Feb 13 20:14:53.048300 containerd[1857]: time="2025-02-13T20:14:53.048074391Z" level=info msg="StopPodSandbox for \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\" returns successfully" Feb 13 20:14:53.049623 containerd[1857]: time="2025-02-13T20:14:53.048587911Z" level=info msg="StopPodSandbox for \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\"" Feb 13 20:14:53.049623 containerd[1857]: time="2025-02-13T20:14:53.048667631Z" level=info msg="TearDown network for sandbox \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\" 
successfully" Feb 13 20:14:53.049623 containerd[1857]: time="2025-02-13T20:14:53.048677991Z" level=info msg="StopPodSandbox for \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\" returns successfully" Feb 13 20:14:53.049623 containerd[1857]: time="2025-02-13T20:14:53.048688991Z" level=info msg="StopPodSandbox for \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\"" Feb 13 20:14:53.049623 containerd[1857]: time="2025-02-13T20:14:53.048775791Z" level=info msg="TearDown network for sandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\" successfully" Feb 13 20:14:53.049623 containerd[1857]: time="2025-02-13T20:14:53.049098190Z" level=info msg="StopPodSandbox for \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\" returns successfully" Feb 13 20:14:53.050174 containerd[1857]: time="2025-02-13T20:14:53.050143469Z" level=info msg="StopPodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\"" Feb 13 20:14:53.050269 containerd[1857]: time="2025-02-13T20:14:53.050227509Z" level=info msg="TearDown network for sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" successfully" Feb 13 20:14:53.050269 containerd[1857]: time="2025-02-13T20:14:53.050238589Z" level=info msg="StopPodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" returns successfully" Feb 13 20:14:53.050322 containerd[1857]: time="2025-02-13T20:14:53.050292509Z" level=info msg="StopPodSandbox for \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\"" Feb 13 20:14:53.050361 containerd[1857]: time="2025-02-13T20:14:53.050342189Z" level=info msg="TearDown network for sandbox \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\" successfully" Feb 13 20:14:53.050361 containerd[1857]: time="2025-02-13T20:14:53.050358149Z" level=info msg="StopPodSandbox for \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\" returns 
successfully" Feb 13 20:14:53.050855 containerd[1857]: time="2025-02-13T20:14:53.050833108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7fsxn,Uid:523b7af8-9ff0-42ef-9588-a9c433e599e8,Namespace:kube-system,Attempt:5,}" Feb 13 20:14:53.051753 containerd[1857]: time="2025-02-13T20:14:53.050910548Z" level=info msg="StopPodSandbox for \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\"" Feb 13 20:14:53.052676 containerd[1857]: time="2025-02-13T20:14:53.052623226Z" level=info msg="TearDown network for sandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\" successfully" Feb 13 20:14:53.052676 containerd[1857]: time="2025-02-13T20:14:53.052671106Z" level=info msg="StopPodSandbox for \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\" returns successfully" Feb 13 20:14:53.053600 containerd[1857]: time="2025-02-13T20:14:53.053349545Z" level=info msg="StopPodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\"" Feb 13 20:14:53.053600 containerd[1857]: time="2025-02-13T20:14:53.053430585Z" level=info msg="TearDown network for sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" successfully" Feb 13 20:14:53.053600 containerd[1857]: time="2025-02-13T20:14:53.053439665Z" level=info msg="StopPodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" returns successfully" Feb 13 20:14:53.054451 kubelet[3630]: I0213 20:14:53.054373 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d" Feb 13 20:14:53.054727 containerd[1857]: time="2025-02-13T20:14:53.054658663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-sdjn4,Uid:7056d74b-e474-45a1-8d67-963b49a257e9,Namespace:calico-apiserver,Attempt:5,}" Feb 13 20:14:53.056003 containerd[1857]: time="2025-02-13T20:14:53.055957262Z" 
level=info msg="StopPodSandbox for \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\"" Feb 13 20:14:53.056566 containerd[1857]: time="2025-02-13T20:14:53.056149222Z" level=info msg="Ensure that sandbox 14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d in task-service has been cleanup successfully" Feb 13 20:14:53.056566 containerd[1857]: time="2025-02-13T20:14:53.056322301Z" level=info msg="TearDown network for sandbox \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\" successfully" Feb 13 20:14:53.056566 containerd[1857]: time="2025-02-13T20:14:53.056336901Z" level=info msg="StopPodSandbox for \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\" returns successfully" Feb 13 20:14:53.057414 containerd[1857]: time="2025-02-13T20:14:53.057034861Z" level=info msg="StopPodSandbox for \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\"" Feb 13 20:14:53.057414 containerd[1857]: time="2025-02-13T20:14:53.057130780Z" level=info msg="TearDown network for sandbox \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\" successfully" Feb 13 20:14:53.057414 containerd[1857]: time="2025-02-13T20:14:53.057142420Z" level=info msg="StopPodSandbox for \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\" returns successfully" Feb 13 20:14:53.057414 containerd[1857]: time="2025-02-13T20:14:53.057398100Z" level=info msg="StopPodSandbox for \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\"" Feb 13 20:14:53.057497 containerd[1857]: time="2025-02-13T20:14:53.057467980Z" level=info msg="TearDown network for sandbox \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\" successfully" Feb 13 20:14:53.057497 containerd[1857]: time="2025-02-13T20:14:53.057477180Z" level=info msg="StopPodSandbox for \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\" returns successfully" Feb 13 20:14:53.058290 containerd[1857]: 
time="2025-02-13T20:14:53.058226939Z" level=info msg="StopPodSandbox for \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\"" Feb 13 20:14:53.058360 containerd[1857]: time="2025-02-13T20:14:53.058342779Z" level=info msg="TearDown network for sandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\" successfully" Feb 13 20:14:53.058400 containerd[1857]: time="2025-02-13T20:14:53.058362179Z" level=info msg="StopPodSandbox for \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\" returns successfully" Feb 13 20:14:53.059708 containerd[1857]: time="2025-02-13T20:14:53.059276098Z" level=info msg="StopPodSandbox for \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\"" Feb 13 20:14:53.059708 containerd[1857]: time="2025-02-13T20:14:53.059360538Z" level=info msg="TearDown network for sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" successfully" Feb 13 20:14:53.059708 containerd[1857]: time="2025-02-13T20:14:53.059369418Z" level=info msg="StopPodSandbox for \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" returns successfully" Feb 13 20:14:53.060065 containerd[1857]: time="2025-02-13T20:14:53.060020657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c647b4f5b-k5qm9,Uid:64b944df-0855-47f3-9fea-a03847895d35,Namespace:calico-system,Attempt:5,}" Feb 13 20:14:53.061746 kubelet[3630]: I0213 20:14:53.061287 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8" Feb 13 20:14:53.062319 containerd[1857]: time="2025-02-13T20:14:53.062282654Z" level=info msg="StopPodSandbox for \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\"" Feb 13 20:14:53.062464 containerd[1857]: time="2025-02-13T20:14:53.062440774Z" level=info msg="Ensure that sandbox 87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8 in 
task-service has been cleanup successfully" Feb 13 20:14:53.062688 containerd[1857]: time="2025-02-13T20:14:53.062661774Z" level=info msg="TearDown network for sandbox \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\" successfully" Feb 13 20:14:53.062688 containerd[1857]: time="2025-02-13T20:14:53.062684054Z" level=info msg="StopPodSandbox for \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\" returns successfully" Feb 13 20:14:53.063436 containerd[1857]: time="2025-02-13T20:14:53.063402253Z" level=info msg="StopPodSandbox for \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\"" Feb 13 20:14:53.063513 containerd[1857]: time="2025-02-13T20:14:53.063488933Z" level=info msg="TearDown network for sandbox \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\" successfully" Feb 13 20:14:53.063513 containerd[1857]: time="2025-02-13T20:14:53.063499253Z" level=info msg="StopPodSandbox for \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\" returns successfully" Feb 13 20:14:53.064469 containerd[1857]: time="2025-02-13T20:14:53.064333252Z" level=info msg="StopPodSandbox for \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\"" Feb 13 20:14:53.064469 containerd[1857]: time="2025-02-13T20:14:53.064422812Z" level=info msg="TearDown network for sandbox \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\" successfully" Feb 13 20:14:53.064469 containerd[1857]: time="2025-02-13T20:14:53.064444812Z" level=info msg="StopPodSandbox for \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\" returns successfully" Feb 13 20:14:53.065148 containerd[1857]: time="2025-02-13T20:14:53.065092731Z" level=info msg="StopPodSandbox for \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\"" Feb 13 20:14:53.065224 containerd[1857]: time="2025-02-13T20:14:53.065189531Z" level=info msg="TearDown network for sandbox 
\"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\" successfully" Feb 13 20:14:53.065224 containerd[1857]: time="2025-02-13T20:14:53.065202331Z" level=info msg="StopPodSandbox for \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\" returns successfully" Feb 13 20:14:53.065959 containerd[1857]: time="2025-02-13T20:14:53.065926090Z" level=info msg="StopPodSandbox for \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\"" Feb 13 20:14:53.066066 containerd[1857]: time="2025-02-13T20:14:53.066001250Z" level=info msg="TearDown network for sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" successfully" Feb 13 20:14:53.066066 containerd[1857]: time="2025-02-13T20:14:53.066011610Z" level=info msg="StopPodSandbox for \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" returns successfully" Feb 13 20:14:53.066887 containerd[1857]: time="2025-02-13T20:14:53.066853569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8k52,Uid:ce92c99c-5055-43b1-b9d9-2f6d29334b94,Namespace:kube-system,Attempt:5,}" Feb 13 20:14:53.068599 kubelet[3630]: I0213 20:14:53.068360 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6" Feb 13 20:14:53.071867 containerd[1857]: time="2025-02-13T20:14:53.071830843Z" level=info msg="StopPodSandbox for \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\"" Feb 13 20:14:53.072019 containerd[1857]: time="2025-02-13T20:14:53.071993962Z" level=info msg="Ensure that sandbox b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6 in task-service has been cleanup successfully" Feb 13 20:14:53.072576 containerd[1857]: time="2025-02-13T20:14:53.072484962Z" level=info msg="TearDown network for sandbox \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\" successfully" Feb 13 20:14:53.072576 containerd[1857]: 
time="2025-02-13T20:14:53.072509762Z" level=info msg="StopPodSandbox for \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\" returns successfully" Feb 13 20:14:53.073437 containerd[1857]: time="2025-02-13T20:14:53.072944521Z" level=info msg="StopPodSandbox for \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\"" Feb 13 20:14:53.073437 containerd[1857]: time="2025-02-13T20:14:53.073032121Z" level=info msg="TearDown network for sandbox \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\" successfully" Feb 13 20:14:53.073437 containerd[1857]: time="2025-02-13T20:14:53.073042921Z" level=info msg="StopPodSandbox for \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\" returns successfully" Feb 13 20:14:53.074791 containerd[1857]: time="2025-02-13T20:14:53.074289280Z" level=info msg="StopPodSandbox for \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\"" Feb 13 20:14:53.074791 containerd[1857]: time="2025-02-13T20:14:53.074415840Z" level=info msg="TearDown network for sandbox \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\" successfully" Feb 13 20:14:53.074791 containerd[1857]: time="2025-02-13T20:14:53.074426600Z" level=info msg="StopPodSandbox for \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\" returns successfully" Feb 13 20:14:53.075340 containerd[1857]: time="2025-02-13T20:14:53.074965199Z" level=info msg="StopPodSandbox for \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\"" Feb 13 20:14:53.075340 containerd[1857]: time="2025-02-13T20:14:53.075060639Z" level=info msg="TearDown network for sandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\" successfully" Feb 13 20:14:53.075340 containerd[1857]: time="2025-02-13T20:14:53.075107639Z" level=info msg="StopPodSandbox for \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\" returns successfully" Feb 13 20:14:53.076163 containerd[1857]: 
time="2025-02-13T20:14:53.076102078Z" level=info msg="StopPodSandbox for \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\"" Feb 13 20:14:53.077050 containerd[1857]: time="2025-02-13T20:14:53.076235997Z" level=info msg="TearDown network for sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" successfully" Feb 13 20:14:53.077050 containerd[1857]: time="2025-02-13T20:14:53.076265757Z" level=info msg="StopPodSandbox for \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" returns successfully" Feb 13 20:14:53.078034 containerd[1857]: time="2025-02-13T20:14:53.077956875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-snvwz,Uid:0ebb571e-ba96-4825-a97f-086e3868f081,Namespace:calico-apiserver,Attempt:5,}" Feb 13 20:14:53.322025 systemd[1]: run-netns-cni\x2d43af6b7f\x2dcbb8\x2d9ba5\x2d5005\x2d77303606d2c1.mount: Deactivated successfully. Feb 13 20:14:53.322673 systemd[1]: run-netns-cni\x2db34190ff\x2d2fab\x2d7c36\x2d3067\x2ddd5205109f5d.mount: Deactivated successfully. Feb 13 20:14:53.322770 systemd[1]: run-netns-cni\x2dc6f70445\x2dd04b\x2d9b3c\x2d56a2\x2db8520e906c94.mount: Deactivated successfully. Feb 13 20:14:53.322842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014-shm.mount: Deactivated successfully. Feb 13 20:14:53.322924 systemd[1]: run-netns-cni\x2d363aa446\x2d3524\x2d5786\x2d49dd\x2df766c49219ad.mount: Deactivated successfully. Feb 13 20:14:53.322994 systemd[1]: run-netns-cni\x2d615e9cd5\x2d0845\x2d3e64\x2dc339\x2d4004871309b4.mount: Deactivated successfully. Feb 13 20:14:53.323061 systemd[1]: run-netns-cni\x2d05d4df14\x2d491e\x2dc5aa\x2df6d7\x2d115bac668d5a.mount: Deactivated successfully. 
Feb 13 20:14:54.109079 containerd[1857]: time="2025-02-13T20:14:54.109031508Z" level=error msg="Failed to destroy network for sandbox \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.111365 containerd[1857]: time="2025-02-13T20:14:54.111233585Z" level=error msg="encountered an error cleaning up failed sandbox \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.111976 containerd[1857]: time="2025-02-13T20:14:54.111931664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-sdjn4,Uid:7056d74b-e474-45a1-8d67-963b49a257e9,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.114465 kubelet[3630]: E0213 20:14:54.112762 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.114465 kubelet[3630]: E0213 20:14:54.112818 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:54.114465 kubelet[3630]: E0213 20:14:54.112839 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" Feb 13 20:14:54.115045 kubelet[3630]: E0213 20:14:54.112874 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dff5cb7c5-sdjn4_calico-apiserver(7056d74b-e474-45a1-8d67-963b49a257e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dff5cb7c5-sdjn4_calico-apiserver(7056d74b-e474-45a1-8d67-963b49a257e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" podUID="7056d74b-e474-45a1-8d67-963b49a257e9" Feb 13 20:14:54.161988 containerd[1857]: time="2025-02-13T20:14:54.161936364Z" level=error msg="Failed to destroy network for sandbox \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.162902 containerd[1857]: time="2025-02-13T20:14:54.162872002Z" level=error msg="encountered an error cleaning up failed sandbox \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.163048 containerd[1857]: time="2025-02-13T20:14:54.163023642Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvfcz,Uid:7a0204f2-4591-46c1-b244-ab8ca59a1ddb,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.163777 kubelet[3630]: E0213 20:14:54.163591 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.163777 kubelet[3630]: E0213 20:14:54.163711 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:54.163777 kubelet[3630]: E0213 20:14:54.163733 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lvfcz" Feb 13 20:14:54.163971 kubelet[3630]: E0213 20:14:54.163929 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lvfcz_calico-system(7a0204f2-4591-46c1-b244-ab8ca59a1ddb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lvfcz_calico-system(7a0204f2-4591-46c1-b244-ab8ca59a1ddb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lvfcz" podUID="7a0204f2-4591-46c1-b244-ab8ca59a1ddb" Feb 13 20:14:54.169548 containerd[1857]: time="2025-02-13T20:14:54.169506914Z" level=error msg="Failed to destroy network for sandbox \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.169918 containerd[1857]: time="2025-02-13T20:14:54.169834434Z" level=error msg="encountered an error cleaning up failed sandbox \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.169918 containerd[1857]: time="2025-02-13T20:14:54.169896314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7fsxn,Uid:523b7af8-9ff0-42ef-9588-a9c433e599e8,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.170130 kubelet[3630]: E0213 20:14:54.170093 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.170264 kubelet[3630]: E0213 20:14:54.170146 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:54.170264 kubelet[3630]: E0213 20:14:54.170172 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7fsxn" Feb 13 20:14:54.170264 kubelet[3630]: E0213 20:14:54.170213 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7fsxn_kube-system(523b7af8-9ff0-42ef-9588-a9c433e599e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7fsxn_kube-system(523b7af8-9ff0-42ef-9588-a9c433e599e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7fsxn" podUID="523b7af8-9ff0-42ef-9588-a9c433e599e8" Feb 13 20:14:54.175887 containerd[1857]: time="2025-02-13T20:14:54.175170748Z" level=error msg="Failed to destroy network for sandbox \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.177721 containerd[1857]: time="2025-02-13T20:14:54.176631986Z" level=error msg="encountered an error cleaning up failed sandbox \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.177721 containerd[1857]: time="2025-02-13T20:14:54.176710866Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6c647b4f5b-k5qm9,Uid:64b944df-0855-47f3-9fea-a03847895d35,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.177853 kubelet[3630]: E0213 20:14:54.176874 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.177853 kubelet[3630]: E0213 20:14:54.176917 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:54.177853 kubelet[3630]: E0213 20:14:54.176939 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" Feb 13 20:14:54.177937 kubelet[3630]: E0213 20:14:54.176975 3630 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c647b4f5b-k5qm9_calico-system(64b944df-0855-47f3-9fea-a03847895d35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c647b4f5b-k5qm9_calico-system(64b944df-0855-47f3-9fea-a03847895d35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" podUID="64b944df-0855-47f3-9fea-a03847895d35" Feb 13 20:14:54.181048 containerd[1857]: time="2025-02-13T20:14:54.180789701Z" level=error msg="Failed to destroy network for sandbox \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.181602 containerd[1857]: time="2025-02-13T20:14:54.181575060Z" level=error msg="encountered an error cleaning up failed sandbox \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.182293 containerd[1857]: time="2025-02-13T20:14:54.182009539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8k52,Uid:ce92c99c-5055-43b1-b9d9-2f6d29334b94,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.182365 kubelet[3630]: E0213 20:14:54.182168 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.182365 kubelet[3630]: E0213 20:14:54.182214 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8k52" Feb 13 20:14:54.182365 kubelet[3630]: E0213 20:14:54.182244 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8k52" Feb 13 20:14:54.182687 kubelet[3630]: E0213 20:14:54.182530 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q8k52_kube-system(ce92c99c-5055-43b1-b9d9-2f6d29334b94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q8k52_kube-system(ce92c99c-5055-43b1-b9d9-2f6d29334b94)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q8k52" podUID="ce92c99c-5055-43b1-b9d9-2f6d29334b94" Feb 13 20:14:54.197605 containerd[1857]: time="2025-02-13T20:14:54.197335441Z" level=error msg="Failed to destroy network for sandbox \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.197766 containerd[1857]: time="2025-02-13T20:14:54.197674400Z" level=error msg="encountered an error cleaning up failed sandbox \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.197766 containerd[1857]: time="2025-02-13T20:14:54.197736640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-snvwz,Uid:0ebb571e-ba96-4825-a97f-086e3868f081,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.198671 kubelet[3630]: E0213 20:14:54.198570 3630 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:14:54.198671 kubelet[3630]: E0213 20:14:54.198634 3630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:54.198851 kubelet[3630]: E0213 20:14:54.198677 3630 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" Feb 13 20:14:54.198851 kubelet[3630]: E0213 20:14:54.198736 3630 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dff5cb7c5-snvwz_calico-apiserver(0ebb571e-ba96-4825-a97f-086e3868f081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dff5cb7c5-snvwz_calico-apiserver(0ebb571e-ba96-4825-a97f-086e3868f081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" podUID="0ebb571e-ba96-4825-a97f-086e3868f081" Feb 13 20:14:54.322008 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8-shm.mount: Deactivated successfully. Feb 13 20:14:54.322148 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99-shm.mount: Deactivated successfully. Feb 13 20:14:54.322225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3593571743.mount: Deactivated successfully. Feb 13 20:14:54.480834 containerd[1857]: time="2025-02-13T20:14:54.480701360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 20:14:54.490446 containerd[1857]: time="2025-02-13T20:14:54.490301677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 5.619710471s" Feb 13 20:14:54.490446 containerd[1857]: time="2025-02-13T20:14:54.490342517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 20:14:54.497675 containerd[1857]: time="2025-02-13T20:14:54.496818101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:54.497675 containerd[1857]: time="2025-02-13T20:14:54.497468904Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 
20:14:54.500344 containerd[1857]: time="2025-02-13T20:14:54.500292594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:54.508579 containerd[1857]: time="2025-02-13T20:14:54.508547066Z" level=info msg="CreateContainer within sandbox \"2068a080e6d23ca2ec89c4b9e165fe064c5080ec10ca3e1acc3f0b318e74e704\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:14:54.566683 containerd[1857]: time="2025-02-13T20:14:54.566603845Z" level=info msg="CreateContainer within sandbox \"2068a080e6d23ca2ec89c4b9e165fe064c5080ec10ca3e1acc3f0b318e74e704\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a6832c0dc5bae55f87f34306774a96053c37034792bf053ba77db3700e0bce38\"" Feb 13 20:14:54.567654 containerd[1857]: time="2025-02-13T20:14:54.567411928Z" level=info msg="StartContainer for \"a6832c0dc5bae55f87f34306774a96053c37034792bf053ba77db3700e0bce38\"" Feb 13 20:14:54.629235 containerd[1857]: time="2025-02-13T20:14:54.629183602Z" level=info msg="StartContainer for \"a6832c0dc5bae55f87f34306774a96053c37034792bf053ba77db3700e0bce38\" returns successfully" Feb 13 20:14:54.730920 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:14:54.731031 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 20:14:55.075920 kubelet[3630]: I0213 20:14:55.075894 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510" Feb 13 20:14:55.078107 containerd[1857]: time="2025-02-13T20:14:55.077634778Z" level=info msg="StopPodSandbox for \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\"" Feb 13 20:14:55.078107 containerd[1857]: time="2025-02-13T20:14:55.077943739Z" level=info msg="Ensure that sandbox 3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510 in task-service has been cleanup successfully" Feb 13 20:14:55.078872 containerd[1857]: time="2025-02-13T20:14:55.078294940Z" level=info msg="TearDown network for sandbox \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\" successfully" Feb 13 20:14:55.078872 containerd[1857]: time="2025-02-13T20:14:55.078312980Z" level=info msg="StopPodSandbox for \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\" returns successfully" Feb 13 20:14:55.082774 containerd[1857]: time="2025-02-13T20:14:55.082574157Z" level=info msg="StopPodSandbox for \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\"" Feb 13 20:14:55.082774 containerd[1857]: time="2025-02-13T20:14:55.082704317Z" level=info msg="TearDown network for sandbox \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\" successfully" Feb 13 20:14:55.082774 containerd[1857]: time="2025-02-13T20:14:55.082718557Z" level=info msg="StopPodSandbox for \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\" returns successfully" Feb 13 20:14:55.082888 kubelet[3630]: I0213 20:14:55.082617 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99" Feb 13 20:14:55.083592 containerd[1857]: time="2025-02-13T20:14:55.082969158Z" level=info msg="StopPodSandbox for 
\"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\"" Feb 13 20:14:55.083592 containerd[1857]: time="2025-02-13T20:14:55.083057518Z" level=info msg="TearDown network for sandbox \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\" successfully" Feb 13 20:14:55.083592 containerd[1857]: time="2025-02-13T20:14:55.083067918Z" level=info msg="StopPodSandbox for \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\" returns successfully" Feb 13 20:14:55.083906 containerd[1857]: time="2025-02-13T20:14:55.083575560Z" level=info msg="StopPodSandbox for \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\"" Feb 13 20:14:55.083906 containerd[1857]: time="2025-02-13T20:14:55.083865801Z" level=info msg="StopPodSandbox for \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\"" Feb 13 20:14:55.084241 containerd[1857]: time="2025-02-13T20:14:55.084139083Z" level=info msg="Ensure that sandbox a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99 in task-service has been cleanup successfully" Feb 13 20:14:55.084241 containerd[1857]: time="2025-02-13T20:14:55.084221843Z" level=info msg="TearDown network for sandbox \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\" successfully" Feb 13 20:14:55.084241 containerd[1857]: time="2025-02-13T20:14:55.084237123Z" level=info msg="StopPodSandbox for \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\" returns successfully" Feb 13 20:14:55.084599 containerd[1857]: time="2025-02-13T20:14:55.084567444Z" level=info msg="TearDown network for sandbox \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\" successfully" Feb 13 20:14:55.085051 containerd[1857]: time="2025-02-13T20:14:55.084901725Z" level=info msg="StopPodSandbox for \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\" returns successfully" Feb 13 20:14:55.085608 containerd[1857]: time="2025-02-13T20:14:55.085224207Z" level=info 
msg="StopPodSandbox for \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\"" Feb 13 20:14:55.085608 containerd[1857]: time="2025-02-13T20:14:55.085315487Z" level=info msg="TearDown network for sandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\" successfully" Feb 13 20:14:55.085608 containerd[1857]: time="2025-02-13T20:14:55.085325167Z" level=info msg="StopPodSandbox for \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\" returns successfully" Feb 13 20:14:55.086152 containerd[1857]: time="2025-02-13T20:14:55.085957729Z" level=info msg="StopPodSandbox for \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\"" Feb 13 20:14:55.086152 containerd[1857]: time="2025-02-13T20:14:55.086091170Z" level=info msg="TearDown network for sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" successfully" Feb 13 20:14:55.086152 containerd[1857]: time="2025-02-13T20:14:55.086102890Z" level=info msg="StopPodSandbox for \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" returns successfully" Feb 13 20:14:55.086288 containerd[1857]: time="2025-02-13T20:14:55.086253211Z" level=info msg="StopPodSandbox for \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\"" Feb 13 20:14:55.086581 containerd[1857]: time="2025-02-13T20:14:55.086320651Z" level=info msg="TearDown network for sandbox \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\" successfully" Feb 13 20:14:55.086581 containerd[1857]: time="2025-02-13T20:14:55.086340011Z" level=info msg="StopPodSandbox for \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\" returns successfully" Feb 13 20:14:55.086901 containerd[1857]: time="2025-02-13T20:14:55.086873573Z" level=info msg="StopPodSandbox for \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\"" Feb 13 20:14:55.087284 containerd[1857]: time="2025-02-13T20:14:55.086878493Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-snvwz,Uid:0ebb571e-ba96-4825-a97f-086e3868f081,Namespace:calico-apiserver,Attempt:6,}" Feb 13 20:14:55.087284 containerd[1857]: time="2025-02-13T20:14:55.087228774Z" level=info msg="TearDown network for sandbox \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\" successfully" Feb 13 20:14:55.087284 containerd[1857]: time="2025-02-13T20:14:55.087244854Z" level=info msg="StopPodSandbox for \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\" returns successfully" Feb 13 20:14:55.088048 containerd[1857]: time="2025-02-13T20:14:55.087909137Z" level=info msg="StopPodSandbox for \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\"" Feb 13 20:14:55.088048 containerd[1857]: time="2025-02-13T20:14:55.088010977Z" level=info msg="TearDown network for sandbox \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\" successfully" Feb 13 20:14:55.088048 containerd[1857]: time="2025-02-13T20:14:55.088021897Z" level=info msg="StopPodSandbox for \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\" returns successfully" Feb 13 20:14:55.089093 kubelet[3630]: I0213 20:14:55.088614 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa" Feb 13 20:14:55.089172 containerd[1857]: time="2025-02-13T20:14:55.088818620Z" level=info msg="StopPodSandbox for \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\"" Feb 13 20:14:55.089172 containerd[1857]: time="2025-02-13T20:14:55.088899541Z" level=info msg="TearDown network for sandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\" successfully" Feb 13 20:14:55.089172 containerd[1857]: time="2025-02-13T20:14:55.088909181Z" level=info msg="StopPodSandbox for \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\" returns successfully" Feb 13 20:14:55.089686 containerd[1857]: 
time="2025-02-13T20:14:55.089628943Z" level=info msg="StopPodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\"" Feb 13 20:14:55.090139 containerd[1857]: time="2025-02-13T20:14:55.089824224Z" level=info msg="TearDown network for sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" successfully" Feb 13 20:14:55.090139 containerd[1857]: time="2025-02-13T20:14:55.089840704Z" level=info msg="StopPodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" returns successfully" Feb 13 20:14:55.090139 containerd[1857]: time="2025-02-13T20:14:55.089900304Z" level=info msg="StopPodSandbox for \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\"" Feb 13 20:14:55.090139 containerd[1857]: time="2025-02-13T20:14:55.090025345Z" level=info msg="Ensure that sandbox dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa in task-service has been cleanup successfully" Feb 13 20:14:55.090822 containerd[1857]: time="2025-02-13T20:14:55.090800748Z" level=info msg="TearDown network for sandbox \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\" successfully" Feb 13 20:14:55.091235 containerd[1857]: time="2025-02-13T20:14:55.091193789Z" level=info msg="StopPodSandbox for \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\" returns successfully" Feb 13 20:14:55.091293 containerd[1857]: time="2025-02-13T20:14:55.091172589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-sdjn4,Uid:7056d74b-e474-45a1-8d67-963b49a257e9,Namespace:calico-apiserver,Attempt:6,}" Feb 13 20:14:55.092848 containerd[1857]: time="2025-02-13T20:14:55.092555954Z" level=info msg="StopPodSandbox for \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\"" Feb 13 20:14:55.092848 containerd[1857]: time="2025-02-13T20:14:55.092653475Z" level=info msg="TearDown network for sandbox 
\"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\" successfully" Feb 13 20:14:55.092848 containerd[1857]: time="2025-02-13T20:14:55.092666195Z" level=info msg="StopPodSandbox for \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\" returns successfully" Feb 13 20:14:55.093255 containerd[1857]: time="2025-02-13T20:14:55.093090236Z" level=info msg="StopPodSandbox for \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\"" Feb 13 20:14:55.093255 containerd[1857]: time="2025-02-13T20:14:55.093166237Z" level=info msg="TearDown network for sandbox \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\" successfully" Feb 13 20:14:55.093255 containerd[1857]: time="2025-02-13T20:14:55.093175517Z" level=info msg="StopPodSandbox for \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\" returns successfully" Feb 13 20:14:55.093665 containerd[1857]: time="2025-02-13T20:14:55.093492478Z" level=info msg="StopPodSandbox for \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\"" Feb 13 20:14:55.093665 containerd[1857]: time="2025-02-13T20:14:55.093571198Z" level=info msg="TearDown network for sandbox \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\" successfully" Feb 13 20:14:55.093665 containerd[1857]: time="2025-02-13T20:14:55.093580158Z" level=info msg="StopPodSandbox for \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\" returns successfully" Feb 13 20:14:55.094208 containerd[1857]: time="2025-02-13T20:14:55.093970240Z" level=info msg="StopPodSandbox for \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\"" Feb 13 20:14:55.094208 containerd[1857]: time="2025-02-13T20:14:55.094046880Z" level=info msg="TearDown network for sandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\" successfully" Feb 13 20:14:55.094208 containerd[1857]: time="2025-02-13T20:14:55.094057320Z" level=info msg="StopPodSandbox for 
\"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\" returns successfully" Feb 13 20:14:55.094904 containerd[1857]: time="2025-02-13T20:14:55.094451922Z" level=info msg="StopPodSandbox for \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\"" Feb 13 20:14:55.094904 containerd[1857]: time="2025-02-13T20:14:55.094524562Z" level=info msg="TearDown network for sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" successfully" Feb 13 20:14:55.094904 containerd[1857]: time="2025-02-13T20:14:55.094543042Z" level=info msg="StopPodSandbox for \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" returns successfully" Feb 13 20:14:55.095037 kubelet[3630]: I0213 20:14:55.094679 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8" Feb 13 20:14:55.095246 containerd[1857]: time="2025-02-13T20:14:55.095211844Z" level=info msg="StopPodSandbox for \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\"" Feb 13 20:14:55.095423 containerd[1857]: time="2025-02-13T20:14:55.095399045Z" level=info msg="Ensure that sandbox d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8 in task-service has been cleanup successfully" Feb 13 20:14:55.095659 containerd[1857]: time="2025-02-13T20:14:55.095560726Z" level=info msg="TearDown network for sandbox \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\" successfully" Feb 13 20:14:55.095659 containerd[1857]: time="2025-02-13T20:14:55.095578086Z" level=info msg="StopPodSandbox for \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\" returns successfully" Feb 13 20:14:55.096220 containerd[1857]: time="2025-02-13T20:14:55.095795807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c647b4f5b-k5qm9,Uid:64b944df-0855-47f3-9fea-a03847895d35,Namespace:calico-system,Attempt:6,}" Feb 13 
20:14:55.097171 containerd[1857]: time="2025-02-13T20:14:55.096923531Z" level=info msg="StopPodSandbox for \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\"" Feb 13 20:14:55.097171 containerd[1857]: time="2025-02-13T20:14:55.097016411Z" level=info msg="TearDown network for sandbox \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\" successfully" Feb 13 20:14:55.097171 containerd[1857]: time="2025-02-13T20:14:55.097026411Z" level=info msg="StopPodSandbox for \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\" returns successfully" Feb 13 20:14:55.098164 containerd[1857]: time="2025-02-13T20:14:55.097976935Z" level=info msg="StopPodSandbox for \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\"" Feb 13 20:14:55.098164 containerd[1857]: time="2025-02-13T20:14:55.098111015Z" level=info msg="TearDown network for sandbox \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\" successfully" Feb 13 20:14:55.098164 containerd[1857]: time="2025-02-13T20:14:55.098162576Z" level=info msg="StopPodSandbox for \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\" returns successfully" Feb 13 20:14:55.099495 containerd[1857]: time="2025-02-13T20:14:55.099466980Z" level=info msg="StopPodSandbox for \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\"" Feb 13 20:14:55.099693 containerd[1857]: time="2025-02-13T20:14:55.099544741Z" level=info msg="TearDown network for sandbox \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\" successfully" Feb 13 20:14:55.099693 containerd[1857]: time="2025-02-13T20:14:55.099554781Z" level=info msg="StopPodSandbox for \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\" returns successfully" Feb 13 20:14:55.100192 containerd[1857]: time="2025-02-13T20:14:55.100041383Z" level=info msg="StopPodSandbox for \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\"" Feb 13 20:14:55.100192 
containerd[1857]: time="2025-02-13T20:14:55.100136943Z" level=info msg="TearDown network for sandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\" successfully" Feb 13 20:14:55.100192 containerd[1857]: time="2025-02-13T20:14:55.100147903Z" level=info msg="StopPodSandbox for \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\" returns successfully" Feb 13 20:14:55.102150 containerd[1857]: time="2025-02-13T20:14:55.102109510Z" level=info msg="StopPodSandbox for \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\"" Feb 13 20:14:55.102261 containerd[1857]: time="2025-02-13T20:14:55.102238031Z" level=info msg="TearDown network for sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" successfully" Feb 13 20:14:55.102261 containerd[1857]: time="2025-02-13T20:14:55.102256231Z" level=info msg="StopPodSandbox for \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" returns successfully" Feb 13 20:14:55.103823 containerd[1857]: time="2025-02-13T20:14:55.103705357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvfcz,Uid:7a0204f2-4591-46c1-b244-ab8ca59a1ddb,Namespace:calico-system,Attempt:6,}" Feb 13 20:14:55.104847 kubelet[3630]: I0213 20:14:55.104817 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761" Feb 13 20:14:55.106437 containerd[1857]: time="2025-02-13T20:14:55.106229166Z" level=info msg="StopPodSandbox for \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\"" Feb 13 20:14:55.106437 containerd[1857]: time="2025-02-13T20:14:55.106436407Z" level=info msg="Ensure that sandbox 631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761 in task-service has been cleanup successfully" Feb 13 20:14:55.106876 containerd[1857]: time="2025-02-13T20:14:55.106684048Z" level=info msg="TearDown network for sandbox 
\"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\" successfully" Feb 13 20:14:55.106876 containerd[1857]: time="2025-02-13T20:14:55.106707608Z" level=info msg="StopPodSandbox for \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\" returns successfully" Feb 13 20:14:55.108699 containerd[1857]: time="2025-02-13T20:14:55.108567455Z" level=info msg="StopPodSandbox for \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\"" Feb 13 20:14:55.108768 containerd[1857]: time="2025-02-13T20:14:55.108704175Z" level=info msg="TearDown network for sandbox \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\" successfully" Feb 13 20:14:55.108768 containerd[1857]: time="2025-02-13T20:14:55.108737696Z" level=info msg="StopPodSandbox for \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\" returns successfully" Feb 13 20:14:55.110712 containerd[1857]: time="2025-02-13T20:14:55.110535022Z" level=info msg="StopPodSandbox for \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\"" Feb 13 20:14:55.111125 containerd[1857]: time="2025-02-13T20:14:55.110732903Z" level=info msg="TearDown network for sandbox \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\" successfully" Feb 13 20:14:55.111125 containerd[1857]: time="2025-02-13T20:14:55.110747943Z" level=info msg="StopPodSandbox for \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\" returns successfully" Feb 13 20:14:55.111261 containerd[1857]: time="2025-02-13T20:14:55.111229945Z" level=info msg="StopPodSandbox for \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\"" Feb 13 20:14:55.111356 containerd[1857]: time="2025-02-13T20:14:55.111335665Z" level=info msg="TearDown network for sandbox \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\" successfully" Feb 13 20:14:55.111388 containerd[1857]: time="2025-02-13T20:14:55.111359425Z" level=info msg="StopPodSandbox for 
\"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\" returns successfully" Feb 13 20:14:55.112132 containerd[1857]: time="2025-02-13T20:14:55.111800147Z" level=info msg="StopPodSandbox for \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\"" Feb 13 20:14:55.112132 containerd[1857]: time="2025-02-13T20:14:55.111913948Z" level=info msg="TearDown network for sandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\" successfully" Feb 13 20:14:55.112132 containerd[1857]: time="2025-02-13T20:14:55.111944228Z" level=info msg="StopPodSandbox for \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\" returns successfully" Feb 13 20:14:55.121946 containerd[1857]: time="2025-02-13T20:14:55.121862865Z" level=info msg="StopPodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\"" Feb 13 20:14:55.122169 containerd[1857]: time="2025-02-13T20:14:55.122057626Z" level=info msg="TearDown network for sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" successfully" Feb 13 20:14:55.122169 containerd[1857]: time="2025-02-13T20:14:55.122072266Z" level=info msg="StopPodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" returns successfully" Feb 13 20:14:55.122921 kubelet[3630]: I0213 20:14:55.122895 3630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798" Feb 13 20:14:55.123825 containerd[1857]: time="2025-02-13T20:14:55.123567472Z" level=info msg="StopPodSandbox for \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\"" Feb 13 20:14:55.123825 containerd[1857]: time="2025-02-13T20:14:55.123754832Z" level=info msg="Ensure that sandbox 6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798 in task-service has been cleanup successfully" Feb 13 20:14:55.124112 containerd[1857]: time="2025-02-13T20:14:55.123998473Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7fsxn,Uid:523b7af8-9ff0-42ef-9588-a9c433e599e8,Namespace:kube-system,Attempt:6,}" Feb 13 20:14:55.125026 containerd[1857]: time="2025-02-13T20:14:55.124941917Z" level=info msg="TearDown network for sandbox \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\" successfully" Feb 13 20:14:55.125026 containerd[1857]: time="2025-02-13T20:14:55.124973037Z" level=info msg="StopPodSandbox for \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\" returns successfully" Feb 13 20:14:55.125554 containerd[1857]: time="2025-02-13T20:14:55.125449119Z" level=info msg="StopPodSandbox for \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\"" Feb 13 20:14:55.125745 containerd[1857]: time="2025-02-13T20:14:55.125674160Z" level=info msg="TearDown network for sandbox \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\" successfully" Feb 13 20:14:55.125745 containerd[1857]: time="2025-02-13T20:14:55.125689600Z" level=info msg="StopPodSandbox for \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\" returns successfully" Feb 13 20:14:55.126973 containerd[1857]: time="2025-02-13T20:14:55.126945244Z" level=info msg="StopPodSandbox for \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\"" Feb 13 20:14:55.127049 containerd[1857]: time="2025-02-13T20:14:55.127020685Z" level=info msg="TearDown network for sandbox \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\" successfully" Feb 13 20:14:55.127049 containerd[1857]: time="2025-02-13T20:14:55.127041165Z" level=info msg="StopPodSandbox for \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\" returns successfully" Feb 13 20:14:55.127463 containerd[1857]: time="2025-02-13T20:14:55.127429886Z" level=info msg="StopPodSandbox for \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\"" Feb 13 20:14:55.127526 containerd[1857]: 
time="2025-02-13T20:14:55.127507807Z" level=info msg="TearDown network for sandbox \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\" successfully" Feb 13 20:14:55.127526 containerd[1857]: time="2025-02-13T20:14:55.127517407Z" level=info msg="StopPodSandbox for \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\" returns successfully" Feb 13 20:14:55.128162 containerd[1857]: time="2025-02-13T20:14:55.128126889Z" level=info msg="StopPodSandbox for \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\"" Feb 13 20:14:55.129183 containerd[1857]: time="2025-02-13T20:14:55.129027772Z" level=info msg="TearDown network for sandbox \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\" successfully" Feb 13 20:14:55.129183 containerd[1857]: time="2025-02-13T20:14:55.129049532Z" level=info msg="StopPodSandbox for \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\" returns successfully" Feb 13 20:14:55.129470 containerd[1857]: time="2025-02-13T20:14:55.129442534Z" level=info msg="StopPodSandbox for \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\"" Feb 13 20:14:55.129924 containerd[1857]: time="2025-02-13T20:14:55.129525374Z" level=info msg="TearDown network for sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" successfully" Feb 13 20:14:55.129924 containerd[1857]: time="2025-02-13T20:14:55.129538374Z" level=info msg="StopPodSandbox for \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" returns successfully" Feb 13 20:14:55.130406 containerd[1857]: time="2025-02-13T20:14:55.130281257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8k52,Uid:ce92c99c-5055-43b1-b9d9-2f6d29334b94,Namespace:kube-system,Attempt:6,}" Feb 13 20:14:55.321098 systemd[1]: run-netns-cni\x2d3f999e98\x2d6078\x2db883\x2d7817\x2d3a9135d98e15.mount: Deactivated successfully. 
Feb 13 20:14:55.321244 systemd[1]: run-netns-cni\x2d6775bdbf\x2dd064\x2d03b7\x2da518\x2dedf70d3a11b5.mount: Deactivated successfully. Feb 13 20:14:55.321318 systemd[1]: run-netns-cni\x2d8e91a8ec\x2d6758\x2ddbe1\x2d590c\x2d784d7230dcf6.mount: Deactivated successfully. Feb 13 20:14:55.321389 systemd[1]: run-netns-cni\x2d39c66986\x2dd5d8\x2de251\x2d4acf\x2d0ddf402badde.mount: Deactivated successfully. Feb 13 20:14:55.321468 systemd[1]: run-netns-cni\x2d2408e51b\x2d6e81\x2dd14c\x2d31a9\x2d1aac3f188068.mount: Deactivated successfully. Feb 13 20:14:55.321539 systemd[1]: run-netns-cni\x2dd33265f1\x2d998d\x2db97a\x2dba49\x2da71ba526a28b.mount: Deactivated successfully. Feb 13 20:15:00.391788 kubelet[3630]: I0213 20:15:00.391604 3630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:15:01.650317 kubelet[3630]: I0213 20:15:00.417566 3630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-w9plx" podStartSLOduration=6.861359374 podStartE2EDuration="23.417545935s" podCreationTimestamp="2025-02-13 20:14:37 +0000 UTC" firstStartedPulling="2025-02-13 20:14:37.934984079 +0000 UTC m=+23.283990584" lastFinishedPulling="2025-02-13 20:14:54.49117064 +0000 UTC m=+39.840177145" observedRunningTime="2025-02-13 20:14:55.153115783 +0000 UTC m=+40.502122288" watchObservedRunningTime="2025-02-13 20:15:00.417545935 +0000 UTC m=+45.766552400" Feb 13 20:15:01.746723 kernel: bpftool[5727]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:15:02.905751 systemd-networkd[1373]: vxlan.calico: Link UP Feb 13 20:15:02.905762 systemd-networkd[1373]: vxlan.calico: Gained carrier Feb 13 20:15:04.089153 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Feb 13 20:15:07.850794 systemd-networkd[1373]: cali4c74fba7d49: Link UP Feb 13 20:15:07.853218 systemd-networkd[1373]: cali4c74fba7d49: Gained carrier Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.759 [INFO][5840] cni-plugin/plugin.go 325: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--snvwz-eth0 calico-apiserver-6dff5cb7c5- calico-apiserver 0ebb571e-ba96-4825-a97f-086e3868f081 681 0 2025-02-13 20:14:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dff5cb7c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152.2.1-a-1780829b1e calico-apiserver-6dff5cb7c5-snvwz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4c74fba7d49 [] []}} ContainerID="b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-snvwz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--snvwz-" Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.759 [INFO][5840] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-snvwz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--snvwz-eth0" Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.790 [INFO][5851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" HandleID="k8s-pod-network.b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" Workload="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--snvwz-eth0" Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.801 [INFO][5851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" 
HandleID="k8s-pod-network.b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" Workload="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--snvwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001fa0d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152.2.1-a-1780829b1e", "pod":"calico-apiserver-6dff5cb7c5-snvwz", "timestamp":"2025-02-13 20:15:07.790031531 +0000 UTC"}, Hostname:"ci-4152.2.1-a-1780829b1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.801 [INFO][5851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.801 [INFO][5851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.801 [INFO][5851] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-1780829b1e' Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.804 [INFO][5851] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.808 [INFO][5851] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.812 [INFO][5851] ipam/ipam.go 489: Trying affinity for 192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.816 [INFO][5851] ipam/ipam.go 155: Attempting to load block cidr=192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.818 [INFO][5851] ipam/ipam.go 232: Affinity is 
confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.818 [INFO][5851] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.822 [INFO][5851] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537 Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.832 [INFO][5851] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.840 [INFO][5851] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.25.65/26] block=192.168.25.64/26 handle="k8s-pod-network.b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.840 [INFO][5851] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.25.65/26] handle="k8s-pod-network.b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.840 [INFO][5851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:15:07.881236 containerd[1857]: 2025-02-13 20:15:07.841 [INFO][5851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.65/26] IPv6=[] ContainerID="b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" HandleID="k8s-pod-network.b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" Workload="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--snvwz-eth0" Feb 13 20:15:07.882763 containerd[1857]: 2025-02-13 20:15:07.844 [INFO][5840] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-snvwz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--snvwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--snvwz-eth0", GenerateName:"calico-apiserver-6dff5cb7c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ebb571e-ba96-4825-a97f-086e3868f081", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dff5cb7c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-1780829b1e", ContainerID:"", Pod:"calico-apiserver-6dff5cb7c5-snvwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.25.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c74fba7d49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:07.882763 containerd[1857]: 2025-02-13 20:15:07.845 [INFO][5840] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.25.65/32] ContainerID="b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-snvwz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--snvwz-eth0" Feb 13 20:15:07.882763 containerd[1857]: 2025-02-13 20:15:07.845 [INFO][5840] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c74fba7d49 ContainerID="b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-snvwz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--snvwz-eth0" Feb 13 20:15:07.882763 containerd[1857]: 2025-02-13 20:15:07.855 [INFO][5840] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-snvwz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--snvwz-eth0" Feb 13 20:15:07.882763 containerd[1857]: 2025-02-13 20:15:07.856 [INFO][5840] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-snvwz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--snvwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--snvwz-eth0", GenerateName:"calico-apiserver-6dff5cb7c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ebb571e-ba96-4825-a97f-086e3868f081", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dff5cb7c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-1780829b1e", ContainerID:"b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537", Pod:"calico-apiserver-6dff5cb7c5-snvwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c74fba7d49", MAC:"56:b9:d8:87:36:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:07.882763 containerd[1857]: 2025-02-13 20:15:07.875 [INFO][5840] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-snvwz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--snvwz-eth0" Feb 13 20:15:08.213586 systemd-networkd[1373]: cali6d7ad68be58: Link UP Feb 13 20:15:08.214780 systemd-networkd[1373]: cali6d7ad68be58: Gained carrier Feb 13 
20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.126 [INFO][5875] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--sdjn4-eth0 calico-apiserver-6dff5cb7c5- calico-apiserver 7056d74b-e474-45a1-8d67-963b49a257e9 680 0 2025-02-13 20:14:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dff5cb7c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152.2.1-a-1780829b1e calico-apiserver-6dff5cb7c5-sdjn4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6d7ad68be58 [] []}} ContainerID="e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-sdjn4" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--sdjn4-" Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.126 [INFO][5875] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-sdjn4" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--sdjn4-eth0" Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.155 [INFO][5886] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" HandleID="k8s-pod-network.e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" Workload="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--sdjn4-eth0" Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.168 [INFO][5886] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" HandleID="k8s-pod-network.e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" Workload="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--sdjn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000223140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152.2.1-a-1780829b1e", "pod":"calico-apiserver-6dff5cb7c5-sdjn4", "timestamp":"2025-02-13 20:15:08.155813834 +0000 UTC"}, Hostname:"ci-4152.2.1-a-1780829b1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.168 [INFO][5886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.168 [INFO][5886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.168 [INFO][5886] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-1780829b1e' Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.170 [INFO][5886] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.174 [INFO][5886] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.178 [INFO][5886] ipam/ipam.go 489: Trying affinity for 192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.180 [INFO][5886] ipam/ipam.go 155: Attempting to load block cidr=192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.183 [INFO][5886] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.183 [INFO][5886] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.184 [INFO][5886] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1 Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.191 [INFO][5886] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.202 [INFO][5886] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.25.66/26] block=192.168.25.64/26 handle="k8s-pod-network.e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.202 [INFO][5886] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.25.66/26] handle="k8s-pod-network.e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.202 [INFO][5886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:08.234084 containerd[1857]: 2025-02-13 20:15:08.202 [INFO][5886] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.66/26] IPv6=[] ContainerID="e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" HandleID="k8s-pod-network.e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" Workload="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--sdjn4-eth0" Feb 13 20:15:08.234662 containerd[1857]: 2025-02-13 20:15:08.206 [INFO][5875] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-sdjn4" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--sdjn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--sdjn4-eth0", GenerateName:"calico-apiserver-6dff5cb7c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7056d74b-e474-45a1-8d67-963b49a257e9", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6dff5cb7c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-1780829b1e", ContainerID:"", Pod:"calico-apiserver-6dff5cb7c5-sdjn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d7ad68be58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:08.234662 containerd[1857]: 2025-02-13 20:15:08.207 [INFO][5875] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.25.66/32] ContainerID="e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-sdjn4" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--sdjn4-eth0" Feb 13 20:15:08.234662 containerd[1857]: 2025-02-13 20:15:08.207 [INFO][5875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d7ad68be58 ContainerID="e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-sdjn4" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--sdjn4-eth0" Feb 13 20:15:08.234662 containerd[1857]: 2025-02-13 20:15:08.212 [INFO][5875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-sdjn4" 
WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--sdjn4-eth0" Feb 13 20:15:08.234662 containerd[1857]: 2025-02-13 20:15:08.216 [INFO][5875] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-sdjn4" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--sdjn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--sdjn4-eth0", GenerateName:"calico-apiserver-6dff5cb7c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7056d74b-e474-45a1-8d67-963b49a257e9", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dff5cb7c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-1780829b1e", ContainerID:"e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1", Pod:"calico-apiserver-6dff5cb7c5-sdjn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d7ad68be58", MAC:"fa:9d:06:1f:21:22", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:08.234662 containerd[1857]: 2025-02-13 20:15:08.231 [INFO][5875] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1" Namespace="calico-apiserver" Pod="calico-apiserver-6dff5cb7c5-sdjn4" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--apiserver--6dff5cb7c5--sdjn4-eth0" Feb 13 20:15:08.314585 containerd[1857]: time="2025-02-13T20:15:08.313742276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:08.314878 containerd[1857]: time="2025-02-13T20:15:08.314664120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:08.314878 containerd[1857]: time="2025-02-13T20:15:08.314710680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:08.314878 containerd[1857]: time="2025-02-13T20:15:08.314819320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:08.378919 containerd[1857]: time="2025-02-13T20:15:08.378721108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-snvwz,Uid:0ebb571e-ba96-4825-a97f-086e3868f081,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537\"" Feb 13 20:15:08.384516 containerd[1857]: time="2025-02-13T20:15:08.384296728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:15:08.567212 systemd-networkd[1373]: cali2ac9b9db600: Link UP Feb 13 20:15:08.568555 systemd-networkd[1373]: cali2ac9b9db600: Gained carrier Feb 13 20:15:08.592654 containerd[1857]: time="2025-02-13T20:15:08.587338651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:08.592654 containerd[1857]: time="2025-02-13T20:15:08.587419131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:08.592654 containerd[1857]: time="2025-02-13T20:15:08.587434611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:08.592654 containerd[1857]: time="2025-02-13T20:15:08.587548452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.444 [INFO][5947] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--q8k52-eth0 coredns-7db6d8ff4d- kube-system ce92c99c-5055-43b1-b9d9-2f6d29334b94 678 0 2025-02-13 20:14:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152.2.1-a-1780829b1e coredns-7db6d8ff4d-q8k52 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ac9b9db600 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8k52" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--q8k52-" Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.445 [INFO][5947] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8k52" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--q8k52-eth0" Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.483 [INFO][5959] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" HandleID="k8s-pod-network.51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" Workload="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--q8k52-eth0" Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.497 [INFO][5959] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" 
HandleID="k8s-pod-network.51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" Workload="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--q8k52-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bcbb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152.2.1-a-1780829b1e", "pod":"coredns-7db6d8ff4d-q8k52", "timestamp":"2025-02-13 20:15:08.483848722 +0000 UTC"}, Hostname:"ci-4152.2.1-a-1780829b1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.498 [INFO][5959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.498 [INFO][5959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.498 [INFO][5959] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-1780829b1e' Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.501 [INFO][5959] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.506 [INFO][5959] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.517 [INFO][5959] ipam/ipam.go 489: Trying affinity for 192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.520 [INFO][5959] ipam/ipam.go 155: Attempting to load block cidr=192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.524 [INFO][5959] ipam/ipam.go 232: Affinity is confirmed and block has 
been loaded cidr=192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.524 [INFO][5959] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.527 [INFO][5959] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7 Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.532 [INFO][5959] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.547 [INFO][5959] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.25.67/26] block=192.168.25.64/26 handle="k8s-pod-network.51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.547 [INFO][5959] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.25.67/26] handle="k8s-pod-network.51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.547 [INFO][5959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:15:08.605887 containerd[1857]: 2025-02-13 20:15:08.547 [INFO][5959] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.67/26] IPv6=[] ContainerID="51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" HandleID="k8s-pod-network.51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" Workload="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--q8k52-eth0" Feb 13 20:15:08.606544 containerd[1857]: 2025-02-13 20:15:08.550 [INFO][5947] cni-plugin/k8s.go 386: Populated endpoint ContainerID="51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8k52" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--q8k52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--q8k52-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ce92c99c-5055-43b1-b9d9-2f6d29334b94", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-1780829b1e", ContainerID:"", Pod:"coredns-7db6d8ff4d-q8k52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali2ac9b9db600", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:08.606544 containerd[1857]: 2025-02-13 20:15:08.551 [INFO][5947] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.25.67/32] ContainerID="51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8k52" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--q8k52-eth0" Feb 13 20:15:08.606544 containerd[1857]: 2025-02-13 20:15:08.551 [INFO][5947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ac9b9db600 ContainerID="51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8k52" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--q8k52-eth0" Feb 13 20:15:08.606544 containerd[1857]: 2025-02-13 20:15:08.569 [INFO][5947] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8k52" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--q8k52-eth0" Feb 13 20:15:08.606544 containerd[1857]: 2025-02-13 20:15:08.570 [INFO][5947] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8k52" 
WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--q8k52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--q8k52-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ce92c99c-5055-43b1-b9d9-2f6d29334b94", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-1780829b1e", ContainerID:"51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7", Pod:"coredns-7db6d8ff4d-q8k52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ac9b9db600", MAC:"62:0e:f1:b7:4e:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:08.606544 containerd[1857]: 2025-02-13 20:15:08.599 
[INFO][5947] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8k52" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--q8k52-eth0" Feb 13 20:15:08.799545 systemd-networkd[1373]: califb49595a742: Link UP Feb 13 20:15:08.807499 containerd[1857]: time="2025-02-13T20:15:08.807283388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:08.807950 containerd[1857]: time="2025-02-13T20:15:08.807828988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:08.811407 containerd[1857]: time="2025-02-13T20:15:08.810983232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:08.811407 containerd[1857]: time="2025-02-13T20:15:08.811225952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:08.811990 containerd[1857]: time="2025-02-13T20:15:08.811409712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff5cb7c5-sdjn4,Uid:7056d74b-e474-45a1-8d67-963b49a257e9,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1\"" Feb 13 20:15:08.815828 systemd-networkd[1373]: califb49595a742: Gained carrier Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.573 [INFO][5965] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--1780829b1e-k8s-calico--kube--controllers--6c647b4f5b--k5qm9-eth0 calico-kube-controllers-6c647b4f5b- calico-system 64b944df-0855-47f3-9fea-a03847895d35 679 0 2025-02-13 20:14:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c647b4f5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4152.2.1-a-1780829b1e calico-kube-controllers-6c647b4f5b-k5qm9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califb49595a742 [] []}} ContainerID="927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" Namespace="calico-system" Pod="calico-kube-controllers-6c647b4f5b-k5qm9" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--kube--controllers--6c647b4f5b--k5qm9-" Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.573 [INFO][5965] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" Namespace="calico-system" Pod="calico-kube-controllers-6c647b4f5b-k5qm9" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--kube--controllers--6c647b4f5b--k5qm9-eth0" Feb 13 20:15:08.851844 containerd[1857]: 
2025-02-13 20:15:08.683 [INFO][5998] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" HandleID="k8s-pod-network.927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" Workload="ci--4152.2.1--a--1780829b1e-k8s-calico--kube--controllers--6c647b4f5b--k5qm9-eth0" Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.708 [INFO][5998] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" HandleID="k8s-pod-network.927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" Workload="ci--4152.2.1--a--1780829b1e-k8s-calico--kube--controllers--6c647b4f5b--k5qm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aba10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152.2.1-a-1780829b1e", "pod":"calico-kube-controllers-6c647b4f5b-k5qm9", "timestamp":"2025-02-13 20:15:08.683175352 +0000 UTC"}, Hostname:"ci-4152.2.1-a-1780829b1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.709 [INFO][5998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.709 [INFO][5998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.709 [INFO][5998] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-1780829b1e' Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.715 [INFO][5998] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.724 [INFO][5998] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.736 [INFO][5998] ipam/ipam.go 489: Trying affinity for 192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.741 [INFO][5998] ipam/ipam.go 155: Attempting to load block cidr=192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.745 [INFO][5998] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.745 [INFO][5998] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.750 [INFO][5998] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93 Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.772 [INFO][5998] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.783 [INFO][5998] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.25.68/26] block=192.168.25.64/26 handle="k8s-pod-network.927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.783 [INFO][5998] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.25.68/26] handle="k8s-pod-network.927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.783 [INFO][5998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:08.851844 containerd[1857]: 2025-02-13 20:15:08.783 [INFO][5998] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.68/26] IPv6=[] ContainerID="927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" HandleID="k8s-pod-network.927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" Workload="ci--4152.2.1--a--1780829b1e-k8s-calico--kube--controllers--6c647b4f5b--k5qm9-eth0" Feb 13 20:15:08.852415 containerd[1857]: 2025-02-13 20:15:08.792 [INFO][5965] cni-plugin/k8s.go 386: Populated endpoint ContainerID="927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" Namespace="calico-system" Pod="calico-kube-controllers-6c647b4f5b-k5qm9" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--kube--controllers--6c647b4f5b--k5qm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--1780829b1e-k8s-calico--kube--controllers--6c647b4f5b--k5qm9-eth0", GenerateName:"calico-kube-controllers-6c647b4f5b-", Namespace:"calico-system", SelfLink:"", UID:"64b944df-0855-47f3-9fea-a03847895d35", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c647b4f5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-1780829b1e", ContainerID:"", Pod:"calico-kube-controllers-6c647b4f5b-k5qm9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb49595a742", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:08.852415 containerd[1857]: 2025-02-13 20:15:08.792 [INFO][5965] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.25.68/32] ContainerID="927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" Namespace="calico-system" Pod="calico-kube-controllers-6c647b4f5b-k5qm9" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--kube--controllers--6c647b4f5b--k5qm9-eth0" Feb 13 20:15:08.852415 containerd[1857]: 2025-02-13 20:15:08.792 [INFO][5965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb49595a742 ContainerID="927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" Namespace="calico-system" Pod="calico-kube-controllers-6c647b4f5b-k5qm9" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--kube--controllers--6c647b4f5b--k5qm9-eth0" Feb 13 20:15:08.852415 containerd[1857]: 2025-02-13 20:15:08.816 [INFO][5965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" Namespace="calico-system" Pod="calico-kube-controllers-6c647b4f5b-k5qm9" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--kube--controllers--6c647b4f5b--k5qm9-eth0" Feb 13 20:15:08.852415 containerd[1857]: 2025-02-13 20:15:08.817 [INFO][5965] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" Namespace="calico-system" Pod="calico-kube-controllers-6c647b4f5b-k5qm9" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--kube--controllers--6c647b4f5b--k5qm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--1780829b1e-k8s-calico--kube--controllers--6c647b4f5b--k5qm9-eth0", GenerateName:"calico-kube-controllers-6c647b4f5b-", Namespace:"calico-system", SelfLink:"", UID:"64b944df-0855-47f3-9fea-a03847895d35", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c647b4f5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-1780829b1e", ContainerID:"927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93", Pod:"calico-kube-controllers-6c647b4f5b-k5qm9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.68/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb49595a742", MAC:"82:08:31:ca:a9:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:08.852415 containerd[1857]: 2025-02-13 20:15:08.843 [INFO][5965] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93" Namespace="calico-system" Pod="calico-kube-controllers-6c647b4f5b-k5qm9" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-calico--kube--controllers--6c647b4f5b--k5qm9-eth0" Feb 13 20:15:08.922828 containerd[1857]: time="2025-02-13T20:15:08.922734079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:08.923668 containerd[1857]: time="2025-02-13T20:15:08.923367719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:08.923668 containerd[1857]: time="2025-02-13T20:15:08.923398999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:08.923668 containerd[1857]: time="2025-02-13T20:15:08.923539400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:08.944110 systemd-networkd[1373]: cali3bd65b5fbba: Link UP Feb 13 20:15:08.946622 systemd-networkd[1373]: cali3bd65b5fbba: Gained carrier Feb 13 20:15:08.948078 containerd[1857]: time="2025-02-13T20:15:08.948046427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8k52,Uid:ce92c99c-5055-43b1-b9d9-2f6d29334b94,Namespace:kube-system,Attempt:6,} returns sandbox id \"51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7\"" Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.727 [INFO][6031] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--7fsxn-eth0 coredns-7db6d8ff4d- kube-system 523b7af8-9ff0-42ef-9588-a9c433e599e8 675 0 2025-02-13 20:14:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152.2.1-a-1780829b1e coredns-7db6d8ff4d-7fsxn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3bd65b5fbba [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7fsxn" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--7fsxn-" Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.727 [INFO][6031] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7fsxn" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--7fsxn-eth0" Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.833 [INFO][6064] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" HandleID="k8s-pod-network.6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" Workload="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--7fsxn-eth0" Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.872 [INFO][6064] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" HandleID="k8s-pod-network.6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" Workload="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--7fsxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319830), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152.2.1-a-1780829b1e", "pod":"coredns-7db6d8ff4d-7fsxn", "timestamp":"2025-02-13 20:15:08.833204097 +0000 UTC"}, Hostname:"ci-4152.2.1-a-1780829b1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.872 [INFO][6064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.872 [INFO][6064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.872 [INFO][6064] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-1780829b1e' Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.877 [INFO][6064] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.884 [INFO][6064] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.892 [INFO][6064] ipam/ipam.go 489: Trying affinity for 192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.896 [INFO][6064] ipam/ipam.go 155: Attempting to load block cidr=192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.905 [INFO][6064] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.905 [INFO][6064] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.908 [INFO][6064] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88 Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.915 [INFO][6064] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.928 [INFO][6064] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.25.69/26] block=192.168.25.64/26 handle="k8s-pod-network.6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.928 [INFO][6064] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.25.69/26] handle="k8s-pod-network.6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.928 [INFO][6064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:08.990151 containerd[1857]: 2025-02-13 20:15:08.928 [INFO][6064] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.69/26] IPv6=[] ContainerID="6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" HandleID="k8s-pod-network.6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" Workload="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--7fsxn-eth0" Feb 13 20:15:08.991199 containerd[1857]: 2025-02-13 20:15:08.933 [INFO][6031] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7fsxn" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--7fsxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--7fsxn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"523b7af8-9ff0-42ef-9588-a9c433e599e8", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-1780829b1e", ContainerID:"", Pod:"coredns-7db6d8ff4d-7fsxn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3bd65b5fbba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:08.991199 containerd[1857]: 2025-02-13 20:15:08.933 [INFO][6031] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.25.69/32] ContainerID="6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7fsxn" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--7fsxn-eth0" Feb 13 20:15:08.991199 containerd[1857]: 2025-02-13 20:15:08.933 [INFO][6031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3bd65b5fbba ContainerID="6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7fsxn" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--7fsxn-eth0" Feb 13 20:15:08.991199 containerd[1857]: 2025-02-13 20:15:08.948 [INFO][6031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7fsxn" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--7fsxn-eth0" Feb 13 20:15:08.991199 containerd[1857]: 2025-02-13 20:15:08.954 [INFO][6031] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7fsxn" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--7fsxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--7fsxn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"523b7af8-9ff0-42ef-9588-a9c433e599e8", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-1780829b1e", ContainerID:"6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88", Pod:"coredns-7db6d8ff4d-7fsxn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3bd65b5fbba", MAC:"7a:85:47:fc:d4:a3", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:08.991199 containerd[1857]: 2025-02-13 20:15:08.976 [INFO][6031] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7fsxn" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-coredns--7db6d8ff4d--7fsxn-eth0" Feb 13 20:15:09.001061 containerd[1857]: time="2025-02-13T20:15:08.998285845Z" level=info msg="CreateContainer within sandbox \"51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:15:09.042630 systemd-networkd[1373]: calif6d4929fe03: Link UP Feb 13 20:15:09.051147 systemd-networkd[1373]: calif6d4929fe03: Gained carrier Feb 13 20:15:09.060380 containerd[1857]: time="2025-02-13T20:15:09.059936995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c647b4f5b-k5qm9,Uid:64b944df-0855-47f3-9fea-a03847895d35,Namespace:calico-system,Attempt:6,} returns sandbox id \"927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93\"" Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:08.682 [INFO][6004] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--1780829b1e-k8s-csi--node--driver--lvfcz-eth0 csi-node-driver- calico-system 7a0204f2-4591-46c1-b244-ab8ca59a1ddb 596 0 2025-02-13 20:14:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4152.2.1-a-1780829b1e csi-node-driver-lvfcz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif6d4929fe03 [] []}} ContainerID="0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" Namespace="calico-system" Pod="csi-node-driver-lvfcz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-csi--node--driver--lvfcz-" Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:08.682 [INFO][6004] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" Namespace="calico-system" Pod="csi-node-driver-lvfcz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-csi--node--driver--lvfcz-eth0" Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:08.864 [INFO][6056] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" HandleID="k8s-pod-network.0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" Workload="ci--4152.2.1--a--1780829b1e-k8s-csi--node--driver--lvfcz-eth0" Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:08.887 [INFO][6056] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" HandleID="k8s-pod-network.0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" Workload="ci--4152.2.1--a--1780829b1e-k8s-csi--node--driver--lvfcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002688a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152.2.1-a-1780829b1e", "pod":"csi-node-driver-lvfcz", "timestamp":"2025-02-13 20:15:08.859570967 +0000 UTC"}, 
Hostname:"ci-4152.2.1-a-1780829b1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:08.887 [INFO][6056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:08.928 [INFO][6056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:08.928 [INFO][6056] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-1780829b1e' Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:08.932 [INFO][6056] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:08.947 [INFO][6056] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:08.962 [INFO][6056] ipam/ipam.go 489: Trying affinity for 192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:08.976 [INFO][6056] ipam/ipam.go 155: Attempting to load block cidr=192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:08.987 [INFO][6056] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:08.987 [INFO][6056] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:08.993 
[INFO][6056] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58 Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:09.006 [INFO][6056] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:09.028 [INFO][6056] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.25.70/26] block=192.168.25.64/26 handle="k8s-pod-network.0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:09.028 [INFO][6056] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.25.70/26] handle="k8s-pod-network.0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" host="ci-4152.2.1-a-1780829b1e" Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:09.028 [INFO][6056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:15:09.071937 containerd[1857]: 2025-02-13 20:15:09.028 [INFO][6056] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.70/26] IPv6=[] ContainerID="0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" HandleID="k8s-pod-network.0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" Workload="ci--4152.2.1--a--1780829b1e-k8s-csi--node--driver--lvfcz-eth0" Feb 13 20:15:09.072511 containerd[1857]: 2025-02-13 20:15:09.034 [INFO][6004] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" Namespace="calico-system" Pod="csi-node-driver-lvfcz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-csi--node--driver--lvfcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--1780829b1e-k8s-csi--node--driver--lvfcz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a0204f2-4591-46c1-b244-ab8ca59a1ddb", ResourceVersion:"596", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-1780829b1e", ContainerID:"", Pod:"csi-node-driver-lvfcz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.70/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6d4929fe03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:09.072511 containerd[1857]: 2025-02-13 20:15:09.035 [INFO][6004] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.25.70/32] ContainerID="0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" Namespace="calico-system" Pod="csi-node-driver-lvfcz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-csi--node--driver--lvfcz-eth0" Feb 13 20:15:09.072511 containerd[1857]: 2025-02-13 20:15:09.035 [INFO][6004] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6d4929fe03 ContainerID="0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" Namespace="calico-system" Pod="csi-node-driver-lvfcz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-csi--node--driver--lvfcz-eth0" Feb 13 20:15:09.072511 containerd[1857]: 2025-02-13 20:15:09.046 [INFO][6004] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" Namespace="calico-system" Pod="csi-node-driver-lvfcz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-csi--node--driver--lvfcz-eth0" Feb 13 20:15:09.072511 containerd[1857]: 2025-02-13 20:15:09.052 [INFO][6004] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" Namespace="calico-system" Pod="csi-node-driver-lvfcz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-csi--node--driver--lvfcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--1780829b1e-k8s-csi--node--driver--lvfcz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"7a0204f2-4591-46c1-b244-ab8ca59a1ddb", ResourceVersion:"596", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-1780829b1e", ContainerID:"0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58", Pod:"csi-node-driver-lvfcz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6d4929fe03", MAC:"82:5c:32:d6:74:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:09.072511 containerd[1857]: 2025-02-13 20:15:09.066 [INFO][6004] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58" Namespace="calico-system" Pod="csi-node-driver-lvfcz" WorkloadEndpoint="ci--4152.2.1--a--1780829b1e-k8s-csi--node--driver--lvfcz-eth0" Feb 13 20:15:09.144760 systemd-networkd[1373]: cali4c74fba7d49: Gained IPv6LL Feb 13 20:15:09.161246 containerd[1857]: time="2025-02-13T20:15:09.160811789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:09.161246 containerd[1857]: time="2025-02-13T20:15:09.160887509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:09.161246 containerd[1857]: time="2025-02-13T20:15:09.160902829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:09.161246 containerd[1857]: time="2025-02-13T20:15:09.161009509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:09.243101 containerd[1857]: time="2025-02-13T20:15:09.243044643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7fsxn,Uid:523b7af8-9ff0-42ef-9588-a9c433e599e8,Namespace:kube-system,Attempt:6,} returns sandbox id \"6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88\"" Feb 13 20:15:09.250027 containerd[1857]: time="2025-02-13T20:15:09.249969090Z" level=info msg="CreateContainer within sandbox \"6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:15:09.314495 containerd[1857]: time="2025-02-13T20:15:09.313062042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:09.314495 containerd[1857]: time="2025-02-13T20:15:09.313319082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:09.314495 containerd[1857]: time="2025-02-13T20:15:09.313336082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:09.314495 containerd[1857]: time="2025-02-13T20:15:09.313443363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:09.353101 containerd[1857]: time="2025-02-13T20:15:09.353054808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvfcz,Uid:7a0204f2-4591-46c1-b244-ab8ca59a1ddb,Namespace:calico-system,Attempt:6,} returns sandbox id \"0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58\"" Feb 13 20:15:09.595115 containerd[1857]: time="2025-02-13T20:15:09.595057682Z" level=info msg="CreateContainer within sandbox \"51ce59992e657212f6125abdf4346af7e41270b2683c00c2285241b2490da7a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4afa9ddd421b03ee490680304cad6b5c6348a85e79fdbd7396a87bc092a7fbd5\"" Feb 13 20:15:09.595914 containerd[1857]: time="2025-02-13T20:15:09.595743363Z" level=info msg="StartContainer for \"4afa9ddd421b03ee490680304cad6b5c6348a85e79fdbd7396a87bc092a7fbd5\"" Feb 13 20:15:09.651004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3699865859.mount: Deactivated successfully. 
Feb 13 20:15:09.751811 containerd[1857]: time="2025-02-13T20:15:09.751722500Z" level=info msg="StartContainer for \"4afa9ddd421b03ee490680304cad6b5c6348a85e79fdbd7396a87bc092a7fbd5\" returns successfully" Feb 13 20:15:09.785267 systemd-networkd[1373]: cali6d7ad68be58: Gained IPv6LL Feb 13 20:15:09.959200 containerd[1857]: time="2025-02-13T20:15:09.958938696Z" level=info msg="CreateContainer within sandbox \"6ee67861f3ebc59147a70e414c8f78e7f1e10434f9cf89ee192f50b053675d88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cd34f5fdabc47cc85b67e83abd18040c806c35fb2b5026f7b06932f45a0abb52\"" Feb 13 20:15:09.961210 containerd[1857]: time="2025-02-13T20:15:09.961175018Z" level=info msg="StartContainer for \"cd34f5fdabc47cc85b67e83abd18040c806c35fb2b5026f7b06932f45a0abb52\"" Feb 13 20:15:09.978121 systemd-networkd[1373]: cali3bd65b5fbba: Gained IPv6LL Feb 13 20:15:10.055598 containerd[1857]: time="2025-02-13T20:15:10.055044285Z" level=info msg="StartContainer for \"cd34f5fdabc47cc85b67e83abd18040c806c35fb2b5026f7b06932f45a0abb52\" returns successfully" Feb 13 20:15:10.250767 kubelet[3630]: I0213 20:15:10.249491 3630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-q8k52" podStartSLOduration=40.249453786 podStartE2EDuration="40.249453786s" podCreationTimestamp="2025-02-13 20:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:15:10.248693825 +0000 UTC m=+55.597700330" watchObservedRunningTime="2025-02-13 20:15:10.249453786 +0000 UTC m=+55.598460291" Feb 13 20:15:10.488771 systemd-networkd[1373]: cali2ac9b9db600: Gained IPv6LL Feb 13 20:15:10.552761 systemd-networkd[1373]: calif6d4929fe03: Gained IPv6LL Feb 13 20:15:10.680860 systemd-networkd[1373]: califb49595a742: Gained IPv6LL Feb 13 20:15:14.513870 containerd[1857]: time="2025-02-13T20:15:14.513812804Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:14.516584 containerd[1857]: time="2025-02-13T20:15:14.516504533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 20:15:14.522234 containerd[1857]: time="2025-02-13T20:15:14.522171592Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:14.527733 containerd[1857]: time="2025-02-13T20:15:14.527633050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:14.528757 containerd[1857]: time="2025-02-13T20:15:14.528413333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 6.143900685s" Feb 13 20:15:14.528757 containerd[1857]: time="2025-02-13T20:15:14.528447493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 20:15:14.529793 containerd[1857]: time="2025-02-13T20:15:14.529497217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:15:14.531621 containerd[1857]: time="2025-02-13T20:15:14.531429743Z" level=info msg="CreateContainer within sandbox \"b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:15:14.585349 
containerd[1857]: time="2025-02-13T20:15:14.585218244Z" level=info msg="CreateContainer within sandbox \"b915f745566d478681b19b56b491e8197d64ce2f1900b2a1d00c6ba9178b8537\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e165ea613c5e8848b972a43242dbb12b70eb637626b2cf1046807c4b01eb4d43\"" Feb 13 20:15:14.586957 containerd[1857]: time="2025-02-13T20:15:14.586909169Z" level=info msg="StartContainer for \"e165ea613c5e8848b972a43242dbb12b70eb637626b2cf1046807c4b01eb4d43\"" Feb 13 20:15:14.655241 containerd[1857]: time="2025-02-13T20:15:14.655193159Z" level=info msg="StartContainer for \"e165ea613c5e8848b972a43242dbb12b70eb637626b2cf1046807c4b01eb4d43\" returns successfully" Feb 13 20:15:14.773893 containerd[1857]: time="2025-02-13T20:15:14.773536716Z" level=info msg="StopPodSandbox for \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\"" Feb 13 20:15:14.774997 containerd[1857]: time="2025-02-13T20:15:14.774829240Z" level=info msg="TearDown network for sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" successfully" Feb 13 20:15:14.774997 containerd[1857]: time="2025-02-13T20:15:14.774853560Z" level=info msg="StopPodSandbox for \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" returns successfully" Feb 13 20:15:14.775800 containerd[1857]: time="2025-02-13T20:15:14.775596923Z" level=info msg="RemovePodSandbox for \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\"" Feb 13 20:15:14.775800 containerd[1857]: time="2025-02-13T20:15:14.775628603Z" level=info msg="Forcibly stopping sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\"" Feb 13 20:15:14.775800 containerd[1857]: time="2025-02-13T20:15:14.775710363Z" level=info msg="TearDown network for sandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" successfully" Feb 13 20:15:14.791477 containerd[1857]: time="2025-02-13T20:15:14.790686693Z" level=warning msg="Failed to 
get podSandbox status for container event for sandboxID \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:15:14.791477 containerd[1857]: time="2025-02-13T20:15:14.791125735Z" level=info msg="RemovePodSandbox \"6a7f9fe83c2c4b8f82f7a8bdc650ca97a5224674825b329c2ff2c89570b52d88\" returns successfully" Feb 13 20:15:14.792224 containerd[1857]: time="2025-02-13T20:15:14.791914337Z" level=info msg="StopPodSandbox for \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\"" Feb 13 20:15:14.792224 containerd[1857]: time="2025-02-13T20:15:14.792010778Z" level=info msg="TearDown network for sandbox \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\" successfully" Feb 13 20:15:14.792224 containerd[1857]: time="2025-02-13T20:15:14.792020298Z" level=info msg="StopPodSandbox for \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\" returns successfully" Feb 13 20:15:14.792358 containerd[1857]: time="2025-02-13T20:15:14.792304619Z" level=info msg="RemovePodSandbox for \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\"" Feb 13 20:15:14.792358 containerd[1857]: time="2025-02-13T20:15:14.792332899Z" level=info msg="Forcibly stopping sandbox \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\"" Feb 13 20:15:14.792497 containerd[1857]: time="2025-02-13T20:15:14.792395459Z" level=info msg="TearDown network for sandbox \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\" successfully" Feb 13 20:15:14.803683 containerd[1857]: time="2025-02-13T20:15:14.803613617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:14.803844 containerd[1857]: time="2025-02-13T20:15:14.803701737Z" level=info msg="RemovePodSandbox \"c6e629925fd496ff37d269ef37b65469d4771b6f8edfeb3fd05ce12dd2737fb0\" returns successfully" Feb 13 20:15:14.804458 containerd[1857]: time="2025-02-13T20:15:14.804192859Z" level=info msg="StopPodSandbox for \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\"" Feb 13 20:15:14.804458 containerd[1857]: time="2025-02-13T20:15:14.804285259Z" level=info msg="TearDown network for sandbox \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\" successfully" Feb 13 20:15:14.804458 containerd[1857]: time="2025-02-13T20:15:14.804295619Z" level=info msg="StopPodSandbox for \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\" returns successfully" Feb 13 20:15:14.805078 containerd[1857]: time="2025-02-13T20:15:14.804815581Z" level=info msg="RemovePodSandbox for \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\"" Feb 13 20:15:14.805078 containerd[1857]: time="2025-02-13T20:15:14.804941421Z" level=info msg="Forcibly stopping sandbox \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\"" Feb 13 20:15:14.805078 containerd[1857]: time="2025-02-13T20:15:14.805020381Z" level=info msg="TearDown network for sandbox \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\" successfully" Feb 13 20:15:14.813322 containerd[1857]: time="2025-02-13T20:15:14.812920048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:14.813322 containerd[1857]: time="2025-02-13T20:15:14.812983488Z" level=info msg="RemovePodSandbox \"a474c52deeb6582898bb676bcd483cb09befa2b1726390276a86ea3ac17b39c6\" returns successfully" Feb 13 20:15:14.814991 containerd[1857]: time="2025-02-13T20:15:14.814575413Z" level=info msg="StopPodSandbox for \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\"" Feb 13 20:15:14.814991 containerd[1857]: time="2025-02-13T20:15:14.814695014Z" level=info msg="TearDown network for sandbox \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\" successfully" Feb 13 20:15:14.814991 containerd[1857]: time="2025-02-13T20:15:14.814706774Z" level=info msg="StopPodSandbox for \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\" returns successfully" Feb 13 20:15:14.815741 containerd[1857]: time="2025-02-13T20:15:14.815547777Z" level=info msg="RemovePodSandbox for \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\"" Feb 13 20:15:14.815741 containerd[1857]: time="2025-02-13T20:15:14.815575177Z" level=info msg="Forcibly stopping sandbox \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\"" Feb 13 20:15:14.816648 containerd[1857]: time="2025-02-13T20:15:14.815971458Z" level=info msg="TearDown network for sandbox \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\" successfully" Feb 13 20:15:14.825011 containerd[1857]: time="2025-02-13T20:15:14.824968408Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:14.825297 containerd[1857]: time="2025-02-13T20:15:14.825277969Z" level=info msg="RemovePodSandbox \"a3e02a89654720920db436dee6594f5f41f55afb175874e79707ef8050017320\" returns successfully" Feb 13 20:15:14.826523 containerd[1857]: time="2025-02-13T20:15:14.826104292Z" level=info msg="StopPodSandbox for \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\"" Feb 13 20:15:14.826523 containerd[1857]: time="2025-02-13T20:15:14.826196172Z" level=info msg="TearDown network for sandbox \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\" successfully" Feb 13 20:15:14.826523 containerd[1857]: time="2025-02-13T20:15:14.826208652Z" level=info msg="StopPodSandbox for \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\" returns successfully" Feb 13 20:15:14.827161 containerd[1857]: time="2025-02-13T20:15:14.826922095Z" level=info msg="RemovePodSandbox for \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\"" Feb 13 20:15:14.827161 containerd[1857]: time="2025-02-13T20:15:14.826947415Z" level=info msg="Forcibly stopping sandbox \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\"" Feb 13 20:15:14.827161 containerd[1857]: time="2025-02-13T20:15:14.827004495Z" level=info msg="TearDown network for sandbox \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\" successfully" Feb 13 20:15:14.843111 containerd[1857]: time="2025-02-13T20:15:14.843069629Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:14.844431 containerd[1857]: time="2025-02-13T20:15:14.843315870Z" level=info msg="RemovePodSandbox \"87c61bc57320d3ce5f4bf47e6d48d7b5f766510dc09e1acb462b49300a642da8\" returns successfully" Feb 13 20:15:14.845109 containerd[1857]: time="2025-02-13T20:15:14.844967995Z" level=info msg="StopPodSandbox for \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\"" Feb 13 20:15:14.845554 containerd[1857]: time="2025-02-13T20:15:14.845344437Z" level=info msg="TearDown network for sandbox \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\" successfully" Feb 13 20:15:14.845554 containerd[1857]: time="2025-02-13T20:15:14.845372437Z" level=info msg="StopPodSandbox for \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\" returns successfully" Feb 13 20:15:14.845891 containerd[1857]: time="2025-02-13T20:15:14.845873358Z" level=info msg="RemovePodSandbox for \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\"" Feb 13 20:15:14.846045 containerd[1857]: time="2025-02-13T20:15:14.845951919Z" level=info msg="Forcibly stopping sandbox \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\"" Feb 13 20:15:14.846198 containerd[1857]: time="2025-02-13T20:15:14.846123799Z" level=info msg="TearDown network for sandbox \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\" successfully" Feb 13 20:15:14.855050 containerd[1857]: time="2025-02-13T20:15:14.854875109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:14.855050 containerd[1857]: time="2025-02-13T20:15:14.854952869Z" level=info msg="RemovePodSandbox \"6dcba35b497f090a1f3ea7990ea92f65552ff8257cf8b206d9d492843793b798\" returns successfully" Feb 13 20:15:14.856086 containerd[1857]: time="2025-02-13T20:15:14.855907232Z" level=info msg="StopPodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\"" Feb 13 20:15:14.856086 containerd[1857]: time="2025-02-13T20:15:14.856006953Z" level=info msg="TearDown network for sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" successfully" Feb 13 20:15:14.856086 containerd[1857]: time="2025-02-13T20:15:14.856016753Z" level=info msg="StopPodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" returns successfully" Feb 13 20:15:14.857377 containerd[1857]: time="2025-02-13T20:15:14.856875635Z" level=info msg="RemovePodSandbox for \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\"" Feb 13 20:15:14.857377 containerd[1857]: time="2025-02-13T20:15:14.857242997Z" level=info msg="Forcibly stopping sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\"" Feb 13 20:15:14.857377 containerd[1857]: time="2025-02-13T20:15:14.857333557Z" level=info msg="TearDown network for sandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" successfully" Feb 13 20:15:14.868942 containerd[1857]: time="2025-02-13T20:15:14.868904196Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:14.869259 containerd[1857]: time="2025-02-13T20:15:14.869135997Z" level=info msg="RemovePodSandbox \"24b82eb14fa3ca31115c808c26fc0a41a8af6a0930998193bd6755de68d494df\" returns successfully" Feb 13 20:15:14.869669 containerd[1857]: time="2025-02-13T20:15:14.869598438Z" level=info msg="StopPodSandbox for \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\"" Feb 13 20:15:14.870003 containerd[1857]: time="2025-02-13T20:15:14.869804319Z" level=info msg="TearDown network for sandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\" successfully" Feb 13 20:15:14.870003 containerd[1857]: time="2025-02-13T20:15:14.869832879Z" level=info msg="StopPodSandbox for \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\" returns successfully" Feb 13 20:15:14.870764 containerd[1857]: time="2025-02-13T20:15:14.870406881Z" level=info msg="RemovePodSandbox for \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\"" Feb 13 20:15:14.870764 containerd[1857]: time="2025-02-13T20:15:14.870439921Z" level=info msg="Forcibly stopping sandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\"" Feb 13 20:15:14.870764 containerd[1857]: time="2025-02-13T20:15:14.870501161Z" level=info msg="TearDown network for sandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\" successfully" Feb 13 20:15:14.878342 containerd[1857]: time="2025-02-13T20:15:14.877978986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:14.878342 containerd[1857]: time="2025-02-13T20:15:14.878051826Z" level=info msg="RemovePodSandbox \"3fd3ef508dc8db26da22a57cdbdcf562569646fdec2eb78cfcdf9e1f6e0e6a31\" returns successfully" Feb 13 20:15:14.878906 containerd[1857]: time="2025-02-13T20:15:14.878690629Z" level=info msg="StopPodSandbox for \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\"" Feb 13 20:15:14.881281 containerd[1857]: time="2025-02-13T20:15:14.881134637Z" level=info msg="TearDown network for sandbox \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\" successfully" Feb 13 20:15:14.881281 containerd[1857]: time="2025-02-13T20:15:14.881159757Z" level=info msg="StopPodSandbox for \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\" returns successfully" Feb 13 20:15:14.882341 containerd[1857]: time="2025-02-13T20:15:14.881720799Z" level=info msg="RemovePodSandbox for \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\"" Feb 13 20:15:14.882341 containerd[1857]: time="2025-02-13T20:15:14.881755999Z" level=info msg="Forcibly stopping sandbox \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\"" Feb 13 20:15:14.882341 containerd[1857]: time="2025-02-13T20:15:14.881858879Z" level=info msg="TearDown network for sandbox \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\" successfully" Feb 13 20:15:14.890904 containerd[1857]: time="2025-02-13T20:15:14.890849629Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:14.891217 containerd[1857]: time="2025-02-13T20:15:14.891107470Z" level=info msg="RemovePodSandbox \"7312a86f949e559ced44a5adba0f7053530c5960ae27453f4d48a3c1581a4f67\" returns successfully" Feb 13 20:15:14.891887 containerd[1857]: time="2025-02-13T20:15:14.891591432Z" level=info msg="StopPodSandbox for \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\"" Feb 13 20:15:14.891887 containerd[1857]: time="2025-02-13T20:15:14.891707632Z" level=info msg="TearDown network for sandbox \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\" successfully" Feb 13 20:15:14.891887 containerd[1857]: time="2025-02-13T20:15:14.891718232Z" level=info msg="StopPodSandbox for \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\" returns successfully" Feb 13 20:15:14.892313 containerd[1857]: time="2025-02-13T20:15:14.892202594Z" level=info msg="RemovePodSandbox for \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\"" Feb 13 20:15:14.892313 containerd[1857]: time="2025-02-13T20:15:14.892240354Z" level=info msg="Forcibly stopping sandbox \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\"" Feb 13 20:15:14.892475 containerd[1857]: time="2025-02-13T20:15:14.892420675Z" level=info msg="TearDown network for sandbox \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\" successfully" Feb 13 20:15:14.902152 containerd[1857]: time="2025-02-13T20:15:14.901983147Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:14.902152 containerd[1857]: time="2025-02-13T20:15:14.902046387Z" level=info msg="RemovePodSandbox \"9347f5612b96f45b4c76f895119fa5d79b6ff573d6eb1243fc533b8f3bb963a1\" returns successfully" Feb 13 20:15:14.902959 containerd[1857]: time="2025-02-13T20:15:14.902651669Z" level=info msg="StopPodSandbox for \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\"" Feb 13 20:15:14.902959 containerd[1857]: time="2025-02-13T20:15:14.902744669Z" level=info msg="TearDown network for sandbox \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\" successfully" Feb 13 20:15:14.902959 containerd[1857]: time="2025-02-13T20:15:14.902754269Z" level=info msg="StopPodSandbox for \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\" returns successfully" Feb 13 20:15:14.903549 containerd[1857]: time="2025-02-13T20:15:14.903331711Z" level=info msg="RemovePodSandbox for \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\"" Feb 13 20:15:14.903549 containerd[1857]: time="2025-02-13T20:15:14.903354391Z" level=info msg="Forcibly stopping sandbox \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\"" Feb 13 20:15:14.903549 containerd[1857]: time="2025-02-13T20:15:14.903415192Z" level=info msg="TearDown network for sandbox \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\" successfully" Feb 13 20:15:14.914853 containerd[1857]: time="2025-02-13T20:15:14.914482869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:14.914853 containerd[1857]: time="2025-02-13T20:15:14.914548669Z" level=info msg="RemovePodSandbox \"19487b43502591b2820507815eb9902c660344c8a981635bbac64b4fc45e07e0\" returns successfully" Feb 13 20:15:14.915397 containerd[1857]: time="2025-02-13T20:15:14.915212991Z" level=info msg="StopPodSandbox for \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\"" Feb 13 20:15:14.915397 containerd[1857]: time="2025-02-13T20:15:14.915316752Z" level=info msg="TearDown network for sandbox \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\" successfully" Feb 13 20:15:14.915397 containerd[1857]: time="2025-02-13T20:15:14.915328712Z" level=info msg="StopPodSandbox for \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\" returns successfully" Feb 13 20:15:14.916219 containerd[1857]: time="2025-02-13T20:15:14.915979674Z" level=info msg="RemovePodSandbox for \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\"" Feb 13 20:15:14.916219 containerd[1857]: time="2025-02-13T20:15:14.916086634Z" level=info msg="Forcibly stopping sandbox \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\"" Feb 13 20:15:14.916219 containerd[1857]: time="2025-02-13T20:15:14.916160354Z" level=info msg="TearDown network for sandbox \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\" successfully" Feb 13 20:15:14.931844 containerd[1857]: time="2025-02-13T20:15:14.930708603Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:14.931844 containerd[1857]: time="2025-02-13T20:15:14.930782643Z" level=info msg="RemovePodSandbox \"631521406d9ea837ddbfb9bc12204ee3f2f775cc82b828371dbba1d69ffb6761\" returns successfully" Feb 13 20:15:14.932796 containerd[1857]: time="2025-02-13T20:15:14.932532449Z" level=info msg="StopPodSandbox for \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\"" Feb 13 20:15:14.932796 containerd[1857]: time="2025-02-13T20:15:14.932626170Z" level=info msg="TearDown network for sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" successfully" Feb 13 20:15:14.932796 containerd[1857]: time="2025-02-13T20:15:14.932650930Z" level=info msg="StopPodSandbox for \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" returns successfully" Feb 13 20:15:14.933661 containerd[1857]: time="2025-02-13T20:15:14.933048811Z" level=info msg="RemovePodSandbox for \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\"" Feb 13 20:15:14.933661 containerd[1857]: time="2025-02-13T20:15:14.933079451Z" level=info msg="Forcibly stopping sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\"" Feb 13 20:15:14.933661 containerd[1857]: time="2025-02-13T20:15:14.933148851Z" level=info msg="TearDown network for sandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" successfully" Feb 13 20:15:14.947787 containerd[1857]: time="2025-02-13T20:15:14.947741460Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:14.948117 containerd[1857]: time="2025-02-13T20:15:14.948000581Z" level=info msg="RemovePodSandbox \"da5bd2d807c53d437cbc3413261254fabcc69a5ad43930513afb9c48ac09b252\" returns successfully" Feb 13 20:15:14.948548 containerd[1857]: time="2025-02-13T20:15:14.948520543Z" level=info msg="StopPodSandbox for \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\"" Feb 13 20:15:14.948647 containerd[1857]: time="2025-02-13T20:15:14.948623343Z" level=info msg="TearDown network for sandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\" successfully" Feb 13 20:15:14.948696 containerd[1857]: time="2025-02-13T20:15:14.948655983Z" level=info msg="StopPodSandbox for \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\" returns successfully" Feb 13 20:15:14.949002 containerd[1857]: time="2025-02-13T20:15:14.948978185Z" level=info msg="RemovePodSandbox for \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\"" Feb 13 20:15:14.949075 containerd[1857]: time="2025-02-13T20:15:14.949007385Z" level=info msg="Forcibly stopping sandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\"" Feb 13 20:15:14.949101 containerd[1857]: time="2025-02-13T20:15:14.949078665Z" level=info msg="TearDown network for sandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\" successfully" Feb 13 20:15:14.968889 containerd[1857]: time="2025-02-13T20:15:14.968731571Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:14.968889 containerd[1857]: time="2025-02-13T20:15:14.968804611Z" level=info msg="RemovePodSandbox \"c5e6f8ba481f3e908899f3f6826f86292762ea47a16fbcc7180de41b6f8ff0ba\" returns successfully" Feb 13 20:15:14.969421 containerd[1857]: time="2025-02-13T20:15:14.969391293Z" level=info msg="StopPodSandbox for \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\"" Feb 13 20:15:14.969519 containerd[1857]: time="2025-02-13T20:15:14.969501893Z" level=info msg="TearDown network for sandbox \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\" successfully" Feb 13 20:15:14.969669 containerd[1857]: time="2025-02-13T20:15:14.969562374Z" level=info msg="StopPodSandbox for \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\" returns successfully" Feb 13 20:15:14.969954 containerd[1857]: time="2025-02-13T20:15:14.969933455Z" level=info msg="RemovePodSandbox for \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\"" Feb 13 20:15:14.969994 containerd[1857]: time="2025-02-13T20:15:14.969960255Z" level=info msg="Forcibly stopping sandbox \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\"" Feb 13 20:15:14.970644 containerd[1857]: time="2025-02-13T20:15:14.970024695Z" level=info msg="TearDown network for sandbox \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\" successfully" Feb 13 20:15:14.994492 containerd[1857]: time="2025-02-13T20:15:14.994231296Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:14.994492 containerd[1857]: time="2025-02-13T20:15:14.994302617Z" level=info msg="RemovePodSandbox \"ab0004d21f3b2d8b497e1e07973e346b3e7e047a544d202f24c24633cf27fe79\" returns successfully" Feb 13 20:15:14.995993 containerd[1857]: time="2025-02-13T20:15:14.995169340Z" level=info msg="StopPodSandbox for \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\"" Feb 13 20:15:14.995993 containerd[1857]: time="2025-02-13T20:15:14.995312860Z" level=info msg="TearDown network for sandbox \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\" successfully" Feb 13 20:15:14.995993 containerd[1857]: time="2025-02-13T20:15:14.995323780Z" level=info msg="StopPodSandbox for \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\" returns successfully" Feb 13 20:15:14.996683 containerd[1857]: time="2025-02-13T20:15:14.996626384Z" level=info msg="RemovePodSandbox for \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\"" Feb 13 20:15:14.996758 containerd[1857]: time="2025-02-13T20:15:14.996702025Z" level=info msg="Forcibly stopping sandbox \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\"" Feb 13 20:15:14.999291 containerd[1857]: time="2025-02-13T20:15:14.999234673Z" level=info msg="TearDown network for sandbox \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\" successfully" Feb 13 20:15:15.026813 containerd[1857]: time="2025-02-13T20:15:15.026521485Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:15.035113 containerd[1857]: time="2025-02-13T20:15:15.034850593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:15:15.040519 containerd[1857]: time="2025-02-13T20:15:15.040401491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:15:15.040519 containerd[1857]: time="2025-02-13T20:15:15.040479292Z" level=info msg="RemovePodSandbox \"2c1d91f6d895eae2b3281e79a17f208a836ea376b1186957a2035ed2487bdc50\" returns successfully" Feb 13 20:15:15.042311 containerd[1857]: time="2025-02-13T20:15:15.042283058Z" level=info msg="StopPodSandbox for \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\"" Feb 13 20:15:15.042510 containerd[1857]: time="2025-02-13T20:15:15.042495818Z" level=info msg="TearDown network for sandbox \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\" successfully" Feb 13 20:15:15.042574 containerd[1857]: time="2025-02-13T20:15:15.042563339Z" level=info msg="StopPodSandbox for \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\" returns successfully" Feb 13 20:15:15.043018 containerd[1857]: time="2025-02-13T20:15:15.042996420Z" level=info msg="RemovePodSandbox for \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\"" Feb 13 20:15:15.043123 containerd[1857]: time="2025-02-13T20:15:15.043110820Z" level=info msg="Forcibly stopping sandbox \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\"" Feb 13 20:15:15.043244 containerd[1857]: time="2025-02-13T20:15:15.043231621Z" level=info msg="TearDown network for sandbox \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\" successfully" Feb 13 20:15:15.046464 containerd[1857]: time="2025-02-13T20:15:15.046422072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 
516.892135ms" Feb 13 20:15:15.046618 containerd[1857]: time="2025-02-13T20:15:15.046604912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 20:15:15.047731 containerd[1857]: time="2025-02-13T20:15:15.047701156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:15:15.051004 containerd[1857]: time="2025-02-13T20:15:15.050855406Z" level=info msg="CreateContainer within sandbox \"e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:15:15.056370 containerd[1857]: time="2025-02-13T20:15:15.055872023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.056370 containerd[1857]: time="2025-02-13T20:15:15.056321185Z" level=info msg="RemovePodSandbox \"b87801fb2b8dae877e6b9e0ccb2a2a3e5c0ab068529ddcb7b89d58d2c2146ef6\" returns successfully" Feb 13 20:15:15.057080 containerd[1857]: time="2025-02-13T20:15:15.056945107Z" level=info msg="StopPodSandbox for \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\"" Feb 13 20:15:15.057509 containerd[1857]: time="2025-02-13T20:15:15.057304588Z" level=info msg="TearDown network for sandbox \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\" successfully" Feb 13 20:15:15.057509 containerd[1857]: time="2025-02-13T20:15:15.057321028Z" level=info msg="StopPodSandbox for \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\" returns successfully" Feb 13 20:15:15.058001 containerd[1857]: time="2025-02-13T20:15:15.057844870Z" level=info msg="RemovePodSandbox for \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\"" Feb 13 20:15:15.058001 containerd[1857]: time="2025-02-13T20:15:15.057879750Z" level=info msg="Forcibly stopping sandbox \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\"" Feb 13 20:15:15.058001 containerd[1857]: time="2025-02-13T20:15:15.057954950Z" level=info msg="TearDown network for sandbox \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\" successfully" Feb 13 20:15:15.075434 containerd[1857]: time="2025-02-13T20:15:15.075258048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.075434 containerd[1857]: time="2025-02-13T20:15:15.075323409Z" level=info msg="RemovePodSandbox \"3bcc1defdb9f08f0f2bf66a6cc216ae4885427cd795d5a8cf1836b70f9758510\" returns successfully" Feb 13 20:15:15.076291 containerd[1857]: time="2025-02-13T20:15:15.076125891Z" level=info msg="StopPodSandbox for \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\"" Feb 13 20:15:15.076291 containerd[1857]: time="2025-02-13T20:15:15.076222412Z" level=info msg="TearDown network for sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" successfully" Feb 13 20:15:15.076291 containerd[1857]: time="2025-02-13T20:15:15.076232372Z" level=info msg="StopPodSandbox for \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" returns successfully" Feb 13 20:15:15.077126 containerd[1857]: time="2025-02-13T20:15:15.076989014Z" level=info msg="RemovePodSandbox for \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\"" Feb 13 20:15:15.077126 containerd[1857]: time="2025-02-13T20:15:15.077015014Z" level=info msg="Forcibly stopping sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\"" Feb 13 20:15:15.077397 containerd[1857]: time="2025-02-13T20:15:15.077280055Z" level=info msg="TearDown network for sandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" successfully" Feb 13 20:15:15.091760 containerd[1857]: time="2025-02-13T20:15:15.091715304Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.092292 containerd[1857]: time="2025-02-13T20:15:15.092120305Z" level=info msg="RemovePodSandbox \"50dff2d1f46c4c34f746bc01f8da1201c6c043e65d1cadde6e95a7a1d0643d0a\" returns successfully" Feb 13 20:15:15.093064 containerd[1857]: time="2025-02-13T20:15:15.092760987Z" level=info msg="StopPodSandbox for \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\"" Feb 13 20:15:15.093064 containerd[1857]: time="2025-02-13T20:15:15.092855827Z" level=info msg="TearDown network for sandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\" successfully" Feb 13 20:15:15.093064 containerd[1857]: time="2025-02-13T20:15:15.092866747Z" level=info msg="StopPodSandbox for \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\" returns successfully" Feb 13 20:15:15.094458 containerd[1857]: time="2025-02-13T20:15:15.093368149Z" level=info msg="RemovePodSandbox for \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\"" Feb 13 20:15:15.094458 containerd[1857]: time="2025-02-13T20:15:15.093411989Z" level=info msg="Forcibly stopping sandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\"" Feb 13 20:15:15.094458 containerd[1857]: time="2025-02-13T20:15:15.093485270Z" level=info msg="TearDown network for sandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\" successfully" Feb 13 20:15:15.111400 containerd[1857]: time="2025-02-13T20:15:15.111234409Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.111400 containerd[1857]: time="2025-02-13T20:15:15.111302089Z" level=info msg="RemovePodSandbox \"efaa584f89350f9cb2d99e20d227a0a191b3463baf8605d34cf94bc26fb3e345\" returns successfully" Feb 13 20:15:15.112202 containerd[1857]: time="2025-02-13T20:15:15.111916331Z" level=info msg="StopPodSandbox for \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\"" Feb 13 20:15:15.112202 containerd[1857]: time="2025-02-13T20:15:15.112019852Z" level=info msg="TearDown network for sandbox \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\" successfully" Feb 13 20:15:15.112202 containerd[1857]: time="2025-02-13T20:15:15.112029452Z" level=info msg="StopPodSandbox for \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\" returns successfully" Feb 13 20:15:15.113240 containerd[1857]: time="2025-02-13T20:15:15.112594374Z" level=info msg="RemovePodSandbox for \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\"" Feb 13 20:15:15.113240 containerd[1857]: time="2025-02-13T20:15:15.112628774Z" level=info msg="Forcibly stopping sandbox \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\"" Feb 13 20:15:15.113240 containerd[1857]: time="2025-02-13T20:15:15.112717334Z" level=info msg="TearDown network for sandbox \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\" successfully" Feb 13 20:15:15.125306 containerd[1857]: time="2025-02-13T20:15:15.125263456Z" level=info msg="CreateContainer within sandbox \"e56de1fdabb824f2e6c92e6b6cc7d170ed209fd905064a53405629a40f5c66b1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7a1d0d2e612da3aa9adc536e4bc266728923f7ce47a89975e7c1c5a404a45797\"" Feb 13 20:15:15.127094 containerd[1857]: time="2025-02-13T20:15:15.127055782Z" level=info msg="StartContainer for \"7a1d0d2e612da3aa9adc536e4bc266728923f7ce47a89975e7c1c5a404a45797\"" Feb 13 20:15:15.135926 containerd[1857]: time="2025-02-13T20:15:15.135717211Z" 
level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:15:15.135926 containerd[1857]: time="2025-02-13T20:15:15.135806412Z" level=info msg="RemovePodSandbox \"a78edaed757126486d9b5d6a197ed1916231f7f2e2f5311b85a95ac8e0da6501\" returns successfully" Feb 13 20:15:15.136904 containerd[1857]: time="2025-02-13T20:15:15.136735135Z" level=info msg="StopPodSandbox for \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\"" Feb 13 20:15:15.136904 containerd[1857]: time="2025-02-13T20:15:15.136838895Z" level=info msg="TearDown network for sandbox \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\" successfully" Feb 13 20:15:15.136904 containerd[1857]: time="2025-02-13T20:15:15.136849695Z" level=info msg="StopPodSandbox for \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\" returns successfully" Feb 13 20:15:15.138013 containerd[1857]: time="2025-02-13T20:15:15.137837418Z" level=info msg="RemovePodSandbox for \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\"" Feb 13 20:15:15.138013 containerd[1857]: time="2025-02-13T20:15:15.137867018Z" level=info msg="Forcibly stopping sandbox \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\"" Feb 13 20:15:15.138013 containerd[1857]: time="2025-02-13T20:15:15.137940019Z" level=info msg="TearDown network for sandbox \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\" successfully" Feb 13 20:15:15.151142 containerd[1857]: time="2025-02-13T20:15:15.151019183Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.151818 containerd[1857]: time="2025-02-13T20:15:15.151716145Z" level=info msg="RemovePodSandbox \"b032b4b8928ecca4207ad6639b2f513d9da0d60da5105eedcef21813aa80fc0a\" returns successfully" Feb 13 20:15:15.152938 containerd[1857]: time="2025-02-13T20:15:15.152779228Z" level=info msg="StopPodSandbox for \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\"" Feb 13 20:15:15.152938 containerd[1857]: time="2025-02-13T20:15:15.152876349Z" level=info msg="TearDown network for sandbox \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\" successfully" Feb 13 20:15:15.152938 containerd[1857]: time="2025-02-13T20:15:15.152886229Z" level=info msg="StopPodSandbox for \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\" returns successfully" Feb 13 20:15:15.153708 containerd[1857]: time="2025-02-13T20:15:15.153313030Z" level=info msg="RemovePodSandbox for \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\"" Feb 13 20:15:15.153708 containerd[1857]: time="2025-02-13T20:15:15.153339350Z" level=info msg="Forcibly stopping sandbox \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\"" Feb 13 20:15:15.153708 containerd[1857]: time="2025-02-13T20:15:15.153397911Z" level=info msg="TearDown network for sandbox \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\" successfully" Feb 13 20:15:15.167553 containerd[1857]: time="2025-02-13T20:15:15.167512398Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.168044 containerd[1857]: time="2025-02-13T20:15:15.167743479Z" level=info msg="RemovePodSandbox \"14cf92685bf19204439efff98778801f9cc28c3a823fa2b7c44cb097cf8f917d\" returns successfully" Feb 13 20:15:15.168397 containerd[1857]: time="2025-02-13T20:15:15.168246960Z" level=info msg="StopPodSandbox for \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\"" Feb 13 20:15:15.168397 containerd[1857]: time="2025-02-13T20:15:15.168342161Z" level=info msg="TearDown network for sandbox \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\" successfully" Feb 13 20:15:15.168397 containerd[1857]: time="2025-02-13T20:15:15.168351761Z" level=info msg="StopPodSandbox for \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\" returns successfully" Feb 13 20:15:15.168825 containerd[1857]: time="2025-02-13T20:15:15.168760042Z" level=info msg="RemovePodSandbox for \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\"" Feb 13 20:15:15.168825 containerd[1857]: time="2025-02-13T20:15:15.168786922Z" level=info msg="Forcibly stopping sandbox \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\"" Feb 13 20:15:15.168935 containerd[1857]: time="2025-02-13T20:15:15.168846962Z" level=info msg="TearDown network for sandbox \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\" successfully" Feb 13 20:15:15.192752 containerd[1857]: time="2025-02-13T20:15:15.192694602Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.192928 containerd[1857]: time="2025-02-13T20:15:15.192773363Z" level=info msg="RemovePodSandbox \"dbd23e779c0d4c0d1c42119aedd80bbcd8ff26754d315d04cb73f92fd32cf4aa\" returns successfully" Feb 13 20:15:15.193650 containerd[1857]: time="2025-02-13T20:15:15.193512245Z" level=info msg="StopPodSandbox for \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\"" Feb 13 20:15:15.194761 containerd[1857]: time="2025-02-13T20:15:15.194731369Z" level=info msg="TearDown network for sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" successfully" Feb 13 20:15:15.194826 containerd[1857]: time="2025-02-13T20:15:15.194760169Z" level=info msg="StopPodSandbox for \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" returns successfully" Feb 13 20:15:15.196444 containerd[1857]: time="2025-02-13T20:15:15.195160611Z" level=info msg="RemovePodSandbox for \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\"" Feb 13 20:15:15.196444 containerd[1857]: time="2025-02-13T20:15:15.195189371Z" level=info msg="Forcibly stopping sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\"" Feb 13 20:15:15.196444 containerd[1857]: time="2025-02-13T20:15:15.195258091Z" level=info msg="TearDown network for sandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" successfully" Feb 13 20:15:15.206477 containerd[1857]: time="2025-02-13T20:15:15.206415569Z" level=info msg="StartContainer for \"7a1d0d2e612da3aa9adc536e4bc266728923f7ce47a89975e7c1c5a404a45797\" returns successfully" Feb 13 20:15:15.210306 containerd[1857]: time="2025-02-13T20:15:15.209609499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.210456 containerd[1857]: time="2025-02-13T20:15:15.210336622Z" level=info msg="RemovePodSandbox \"e12a4b11d0ea138d538fc34b0e270cff8284c61814a3dc4ec597fa9f03ac2181\" returns successfully" Feb 13 20:15:15.212186 containerd[1857]: time="2025-02-13T20:15:15.212106988Z" level=info msg="StopPodSandbox for \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\"" Feb 13 20:15:15.212299 containerd[1857]: time="2025-02-13T20:15:15.212222068Z" level=info msg="TearDown network for sandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\" successfully" Feb 13 20:15:15.212299 containerd[1857]: time="2025-02-13T20:15:15.212232548Z" level=info msg="StopPodSandbox for \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\" returns successfully" Feb 13 20:15:15.214891 containerd[1857]: time="2025-02-13T20:15:15.213949194Z" level=info msg="RemovePodSandbox for \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\"" Feb 13 20:15:15.214891 containerd[1857]: time="2025-02-13T20:15:15.213985874Z" level=info msg="Forcibly stopping sandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\"" Feb 13 20:15:15.214891 containerd[1857]: time="2025-02-13T20:15:15.214078154Z" level=info msg="TearDown network for sandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\" successfully" Feb 13 20:15:15.226542 containerd[1857]: time="2025-02-13T20:15:15.226493916Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.226779 containerd[1857]: time="2025-02-13T20:15:15.226751517Z" level=info msg="RemovePodSandbox \"b5565aae7e96b3238a435b254c5c6289a064f866e819a6402286a5c17b0da4b8\" returns successfully" Feb 13 20:15:15.230147 containerd[1857]: time="2025-02-13T20:15:15.230111128Z" level=info msg="StopPodSandbox for \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\"" Feb 13 20:15:15.230432 containerd[1857]: time="2025-02-13T20:15:15.230415889Z" level=info msg="TearDown network for sandbox \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\" successfully" Feb 13 20:15:15.230497 containerd[1857]: time="2025-02-13T20:15:15.230485809Z" level=info msg="StopPodSandbox for \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\" returns successfully" Feb 13 20:15:15.230968 containerd[1857]: time="2025-02-13T20:15:15.230943171Z" level=info msg="RemovePodSandbox for \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\"" Feb 13 20:15:15.231094 containerd[1857]: time="2025-02-13T20:15:15.231079091Z" level=info msg="Forcibly stopping sandbox \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\"" Feb 13 20:15:15.231245 containerd[1857]: time="2025-02-13T20:15:15.231221812Z" level=info msg="TearDown network for sandbox \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\" successfully" Feb 13 20:15:15.243805 containerd[1857]: time="2025-02-13T20:15:15.243762814Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.244018 containerd[1857]: time="2025-02-13T20:15:15.244002775Z" level=info msg="RemovePodSandbox \"55f8f1173410cdc8aed6482de2415ddec61a8f8bda7246949acf1a75093f08a3\" returns successfully" Feb 13 20:15:15.244709 containerd[1857]: time="2025-02-13T20:15:15.244619857Z" level=info msg="StopPodSandbox for \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\"" Feb 13 20:15:15.244924 containerd[1857]: time="2025-02-13T20:15:15.244730737Z" level=info msg="TearDown network for sandbox \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\" successfully" Feb 13 20:15:15.244924 containerd[1857]: time="2025-02-13T20:15:15.244742697Z" level=info msg="StopPodSandbox for \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\" returns successfully" Feb 13 20:15:15.246295 containerd[1857]: time="2025-02-13T20:15:15.245094298Z" level=info msg="RemovePodSandbox for \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\"" Feb 13 20:15:15.246295 containerd[1857]: time="2025-02-13T20:15:15.245118938Z" level=info msg="Forcibly stopping sandbox \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\"" Feb 13 20:15:15.246295 containerd[1857]: time="2025-02-13T20:15:15.245181379Z" level=info msg="TearDown network for sandbox \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\" successfully" Feb 13 20:15:15.258198 containerd[1857]: time="2025-02-13T20:15:15.258139902Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.258333 containerd[1857]: time="2025-02-13T20:15:15.258214142Z" level=info msg="RemovePodSandbox \"9d30b2deafa4e3499950c2376c83ce9b60be7c5d94c77d13e8b564ad195b4243\" returns successfully" Feb 13 20:15:15.258736 containerd[1857]: time="2025-02-13T20:15:15.258706384Z" level=info msg="StopPodSandbox for \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\"" Feb 13 20:15:15.259036 containerd[1857]: time="2025-02-13T20:15:15.259013025Z" level=info msg="TearDown network for sandbox \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\" successfully" Feb 13 20:15:15.259036 containerd[1857]: time="2025-02-13T20:15:15.259035225Z" level=info msg="StopPodSandbox for \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\" returns successfully" Feb 13 20:15:15.260083 containerd[1857]: time="2025-02-13T20:15:15.260047589Z" level=info msg="RemovePodSandbox for \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\"" Feb 13 20:15:15.260083 containerd[1857]: time="2025-02-13T20:15:15.260084149Z" level=info msg="Forcibly stopping sandbox \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\"" Feb 13 20:15:15.260244 containerd[1857]: time="2025-02-13T20:15:15.260199029Z" level=info msg="TearDown network for sandbox \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\" successfully" Feb 13 20:15:15.276755 containerd[1857]: time="2025-02-13T20:15:15.276712804Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.277109 containerd[1857]: time="2025-02-13T20:15:15.276781285Z" level=info msg="RemovePodSandbox \"1aeb86bb670d91355772e6a68363984df89c88555bb0ca18c1bb1d876b7ef014\" returns successfully" Feb 13 20:15:15.278457 containerd[1857]: time="2025-02-13T20:15:15.278420770Z" level=info msg="StopPodSandbox for \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\"" Feb 13 20:15:15.278552 containerd[1857]: time="2025-02-13T20:15:15.278527571Z" level=info msg="TearDown network for sandbox \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\" successfully" Feb 13 20:15:15.278552 containerd[1857]: time="2025-02-13T20:15:15.278537731Z" level=info msg="StopPodSandbox for \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\" returns successfully" Feb 13 20:15:15.282267 containerd[1857]: time="2025-02-13T20:15:15.282229263Z" level=info msg="RemovePodSandbox for \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\"" Feb 13 20:15:15.282399 containerd[1857]: time="2025-02-13T20:15:15.282275583Z" level=info msg="Forcibly stopping sandbox \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\"" Feb 13 20:15:15.282399 containerd[1857]: time="2025-02-13T20:15:15.282366903Z" level=info msg="TearDown network for sandbox \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\" successfully" Feb 13 20:15:15.294940 containerd[1857]: time="2025-02-13T20:15:15.294765185Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.294940 containerd[1857]: time="2025-02-13T20:15:15.294815665Z" level=info msg="RemovePodSandbox \"d1cc8b888422f1f70fd3c43815be873874d16e708c991fddd5f450ba14f622b8\" returns successfully" Feb 13 20:15:15.295343 containerd[1857]: time="2025-02-13T20:15:15.295199186Z" level=info msg="StopPodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\"" Feb 13 20:15:15.295343 containerd[1857]: time="2025-02-13T20:15:15.295292107Z" level=info msg="TearDown network for sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" successfully" Feb 13 20:15:15.295343 containerd[1857]: time="2025-02-13T20:15:15.295301947Z" level=info msg="StopPodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" returns successfully" Feb 13 20:15:15.295761 containerd[1857]: time="2025-02-13T20:15:15.295741068Z" level=info msg="RemovePodSandbox for \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\"" Feb 13 20:15:15.295871 containerd[1857]: time="2025-02-13T20:15:15.295842349Z" level=info msg="Forcibly stopping sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\"" Feb 13 20:15:15.296036 containerd[1857]: time="2025-02-13T20:15:15.295985949Z" level=info msg="TearDown network for sandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" successfully" Feb 13 20:15:15.302382 kubelet[3630]: I0213 20:15:15.301866 3630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7fsxn" podStartSLOduration=45.301846849 podStartE2EDuration="45.301846849s" podCreationTimestamp="2025-02-13 20:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:15:10.28875643 +0000 UTC m=+55.637762935" watchObservedRunningTime="2025-02-13 20:15:15.301846849 +0000 UTC m=+60.650853354" Feb 13 20:15:15.305219 kubelet[3630]: I0213 
20:15:15.305158 3630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-snvwz" podStartSLOduration=32.158360485 podStartE2EDuration="38.30512982s" podCreationTimestamp="2025-02-13 20:14:37 +0000 UTC" firstStartedPulling="2025-02-13 20:15:08.382561561 +0000 UTC m=+53.731568066" lastFinishedPulling="2025-02-13 20:15:14.529330896 +0000 UTC m=+59.878337401" observedRunningTime="2025-02-13 20:15:15.300884206 +0000 UTC m=+60.649890711" watchObservedRunningTime="2025-02-13 20:15:15.30512982 +0000 UTC m=+60.654136325" Feb 13 20:15:15.308686 containerd[1857]: time="2025-02-13T20:15:15.307414827Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:15:15.308686 containerd[1857]: time="2025-02-13T20:15:15.307513468Z" level=info msg="RemovePodSandbox \"9eb263a6e3623f2ba7fdf55c7e175d6c657a02bd61b82507ce856a0b06cfdfd8\" returns successfully" Feb 13 20:15:15.309014 containerd[1857]: time="2025-02-13T20:15:15.308991193Z" level=info msg="StopPodSandbox for \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\"" Feb 13 20:15:15.309189 containerd[1857]: time="2025-02-13T20:15:15.309174233Z" level=info msg="TearDown network for sandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\" successfully" Feb 13 20:15:15.309404 containerd[1857]: time="2025-02-13T20:15:15.309239594Z" level=info msg="StopPodSandbox for \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\" returns successfully" Feb 13 20:15:15.309730 containerd[1857]: time="2025-02-13T20:15:15.309699355Z" level=info msg="RemovePodSandbox for \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\"" Feb 13 20:15:15.309795 containerd[1857]: time="2025-02-13T20:15:15.309732035Z" level=info 
msg="Forcibly stopping sandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\"" Feb 13 20:15:15.309858 containerd[1857]: time="2025-02-13T20:15:15.309839356Z" level=info msg="TearDown network for sandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\" successfully" Feb 13 20:15:15.322424 containerd[1857]: time="2025-02-13T20:15:15.322363718Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:15:15.322559 containerd[1857]: time="2025-02-13T20:15:15.322436838Z" level=info msg="RemovePodSandbox \"64291f817977d70e0b3e6aeac24aeb0232bd92c225eb3a504fa988de40540a83\" returns successfully" Feb 13 20:15:15.322984 containerd[1857]: time="2025-02-13T20:15:15.322963720Z" level=info msg="StopPodSandbox for \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\"" Feb 13 20:15:15.323274 containerd[1857]: time="2025-02-13T20:15:15.323194720Z" level=info msg="TearDown network for sandbox \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\" successfully" Feb 13 20:15:15.323274 containerd[1857]: time="2025-02-13T20:15:15.323209520Z" level=info msg="StopPodSandbox for \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\" returns successfully" Feb 13 20:15:15.323890 containerd[1857]: time="2025-02-13T20:15:15.323487601Z" level=info msg="RemovePodSandbox for \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\"" Feb 13 20:15:15.323890 containerd[1857]: time="2025-02-13T20:15:15.323512321Z" level=info msg="Forcibly stopping sandbox \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\"" Feb 13 20:15:15.323890 containerd[1857]: time="2025-02-13T20:15:15.323574082Z" level=info msg="TearDown network for sandbox 
\"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\" successfully" Feb 13 20:15:15.334748 containerd[1857]: time="2025-02-13T20:15:15.334698399Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:15:15.334917 containerd[1857]: time="2025-02-13T20:15:15.334774279Z" level=info msg="RemovePodSandbox \"4a7447b6ca144b0c5c47e797b65ca2384cbcf1d03131b6f9d48d683f7fb2d4c6\" returns successfully" Feb 13 20:15:15.338924 containerd[1857]: time="2025-02-13T20:15:15.336153164Z" level=info msg="StopPodSandbox for \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\"" Feb 13 20:15:15.340300 containerd[1857]: time="2025-02-13T20:15:15.339530135Z" level=info msg="TearDown network for sandbox \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\" successfully" Feb 13 20:15:15.340300 containerd[1857]: time="2025-02-13T20:15:15.339560655Z" level=info msg="StopPodSandbox for \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\" returns successfully" Feb 13 20:15:15.345832 containerd[1857]: time="2025-02-13T20:15:15.343498749Z" level=info msg="RemovePodSandbox for \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\"" Feb 13 20:15:15.346236 containerd[1857]: time="2025-02-13T20:15:15.346029557Z" level=info msg="Forcibly stopping sandbox \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\"" Feb 13 20:15:15.346236 containerd[1857]: time="2025-02-13T20:15:15.346177878Z" level=info msg="TearDown network for sandbox \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\" successfully" Feb 13 20:15:15.363852 containerd[1857]: time="2025-02-13T20:15:15.363620176Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:15:15.363852 containerd[1857]: time="2025-02-13T20:15:15.363705176Z" level=info msg="RemovePodSandbox \"2a86982dc16d2a8919e465062b887f875b228a587ff72f214d9d4910e8ce00c9\" returns successfully" Feb 13 20:15:15.365511 containerd[1857]: time="2025-02-13T20:15:15.365050021Z" level=info msg="StopPodSandbox for \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\"" Feb 13 20:15:15.365511 containerd[1857]: time="2025-02-13T20:15:15.365172741Z" level=info msg="TearDown network for sandbox \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\" successfully" Feb 13 20:15:15.365511 containerd[1857]: time="2025-02-13T20:15:15.365183421Z" level=info msg="StopPodSandbox for \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\" returns successfully" Feb 13 20:15:15.371612 containerd[1857]: time="2025-02-13T20:15:15.366595866Z" level=info msg="RemovePodSandbox for \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\"" Feb 13 20:15:15.371612 containerd[1857]: time="2025-02-13T20:15:15.366630466Z" level=info msg="Forcibly stopping sandbox \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\"" Feb 13 20:15:15.371612 containerd[1857]: time="2025-02-13T20:15:15.366731867Z" level=info msg="TearDown network for sandbox \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\" successfully" Feb 13 20:15:15.387111 containerd[1857]: time="2025-02-13T20:15:15.386974614Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.387793 containerd[1857]: time="2025-02-13T20:15:15.387726057Z" level=info msg="RemovePodSandbox \"9f15e8600f7dbbd8bfbff6a3659e33fcd0d1097ddc712a0d0c5cf1e1a259a849\" returns successfully" Feb 13 20:15:15.390252 containerd[1857]: time="2025-02-13T20:15:15.390220745Z" level=info msg="StopPodSandbox for \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\"" Feb 13 20:15:15.390680 containerd[1857]: time="2025-02-13T20:15:15.390662907Z" level=info msg="TearDown network for sandbox \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\" successfully" Feb 13 20:15:15.390776 containerd[1857]: time="2025-02-13T20:15:15.390763987Z" level=info msg="StopPodSandbox for \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\" returns successfully" Feb 13 20:15:15.393065 containerd[1857]: time="2025-02-13T20:15:15.393031475Z" level=info msg="RemovePodSandbox for \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\"" Feb 13 20:15:15.393296 containerd[1857]: time="2025-02-13T20:15:15.393278356Z" level=info msg="Forcibly stopping sandbox \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\"" Feb 13 20:15:15.393472 containerd[1857]: time="2025-02-13T20:15:15.393457356Z" level=info msg="TearDown network for sandbox \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\" successfully" Feb 13 20:15:15.407806 containerd[1857]: time="2025-02-13T20:15:15.407751764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:15:15.408384 containerd[1857]: time="2025-02-13T20:15:15.408072965Z" level=info msg="RemovePodSandbox \"a4331bf87d9d83061ae4a388887774a7f58bad97471c113d8853a64061c26a99\" returns successfully" Feb 13 20:15:16.331315 kubelet[3630]: I0213 20:15:16.330711 3630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dff5cb7c5-sdjn4" podStartSLOduration=33.096094621 podStartE2EDuration="39.330691462s" podCreationTimestamp="2025-02-13 20:14:37 +0000 UTC" firstStartedPulling="2025-02-13 20:15:08.812926634 +0000 UTC m=+54.161933139" lastFinishedPulling="2025-02-13 20:15:15.047523435 +0000 UTC m=+60.396529980" observedRunningTime="2025-02-13 20:15:15.371803604 +0000 UTC m=+60.720810109" watchObservedRunningTime="2025-02-13 20:15:16.330691462 +0000 UTC m=+61.679697967" Feb 13 20:15:17.421914 systemd[1]: run-containerd-runc-k8s.io-a6832c0dc5bae55f87f34306774a96053c37034792bf053ba77db3700e0bce38-runc.UY2d31.mount: Deactivated successfully. 
Feb 13 20:15:17.588896 containerd[1857]: time="2025-02-13T20:15:17.588841570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:17.593727 containerd[1857]: time="2025-02-13T20:15:17.593665425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 20:15:17.599405 containerd[1857]: time="2025-02-13T20:15:17.599321643Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:17.607130 containerd[1857]: time="2025-02-13T20:15:17.607087187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:17.608000 containerd[1857]: time="2025-02-13T20:15:17.607885310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.557773706s" Feb 13 20:15:17.608000 containerd[1857]: time="2025-02-13T20:15:17.607915990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 20:15:17.609810 containerd[1857]: time="2025-02-13T20:15:17.609788076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:15:17.624787 containerd[1857]: time="2025-02-13T20:15:17.624706922Z" level=info msg="CreateContainer within sandbox 
\"927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:15:17.672359 containerd[1857]: time="2025-02-13T20:15:17.672312951Z" level=info msg="CreateContainer within sandbox \"927792f25702ff6bf36a9cbb67015fac6cd55e2f88f12895dfa1f3668d1b5b93\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b29da7eec93bfad99d3447ac8639891b40aeaf58b6e6274a4437e1f1d5a1ebf6\"" Feb 13 20:15:17.673044 containerd[1857]: time="2025-02-13T20:15:17.673015474Z" level=info msg="StartContainer for \"b29da7eec93bfad99d3447ac8639891b40aeaf58b6e6274a4437e1f1d5a1ebf6\"" Feb 13 20:15:17.733317 containerd[1857]: time="2025-02-13T20:15:17.733193942Z" level=info msg="StartContainer for \"b29da7eec93bfad99d3447ac8639891b40aeaf58b6e6274a4437e1f1d5a1ebf6\" returns successfully" Feb 13 20:15:18.330169 kubelet[3630]: I0213 20:15:18.329666 3630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c647b4f5b-k5qm9" podStartSLOduration=32.781979051 podStartE2EDuration="41.329543968s" podCreationTimestamp="2025-02-13 20:14:37 +0000 UTC" firstStartedPulling="2025-02-13 20:15:09.061385476 +0000 UTC m=+54.410391981" lastFinishedPulling="2025-02-13 20:15:17.608950433 +0000 UTC m=+62.957956898" observedRunningTime="2025-02-13 20:15:18.328746645 +0000 UTC m=+63.677753110" watchObservedRunningTime="2025-02-13 20:15:18.329543968 +0000 UTC m=+63.678550433" Feb 13 20:15:18.412621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2270440087.mount: Deactivated successfully. 
Feb 13 20:15:19.057455 containerd[1857]: time="2025-02-13T20:15:19.056855843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:19.059443 containerd[1857]: time="2025-02-13T20:15:19.059383571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 20:15:19.064337 containerd[1857]: time="2025-02-13T20:15:19.064283466Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:19.070992 containerd[1857]: time="2025-02-13T20:15:19.070916807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:19.071668 containerd[1857]: time="2025-02-13T20:15:19.071533009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.461531893s" Feb 13 20:15:19.071668 containerd[1857]: time="2025-02-13T20:15:19.071566409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 20:15:19.075440 containerd[1857]: time="2025-02-13T20:15:19.075396381Z" level=info msg="CreateContainer within sandbox \"0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:15:19.128088 containerd[1857]: time="2025-02-13T20:15:19.128046626Z" level=info msg="CreateContainer within 
sandbox \"0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fae2136bb5bdbe738dd798480615669c1385ad4eaa8a5f802a3a08f885c71489\""
Feb 13 20:15:19.129420 containerd[1857]: time="2025-02-13T20:15:19.129365230Z" level=info msg="StartContainer for \"fae2136bb5bdbe738dd798480615669c1385ad4eaa8a5f802a3a08f885c71489\""
Feb 13 20:15:19.219244 containerd[1857]: time="2025-02-13T20:15:19.219174511Z" level=info msg="StartContainer for \"fae2136bb5bdbe738dd798480615669c1385ad4eaa8a5f802a3a08f885c71489\" returns successfully"
Feb 13 20:15:19.220989 containerd[1857]: time="2025-02-13T20:15:19.220898516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Feb 13 20:15:21.011455 containerd[1857]: time="2025-02-13T20:15:21.010743116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:15:21.015364 containerd[1857]: time="2025-02-13T20:15:21.015311330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368"
Feb 13 20:15:21.020521 containerd[1857]: time="2025-02-13T20:15:21.020457186Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:15:21.026712 containerd[1857]: time="2025-02-13T20:15:21.026624726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:15:21.028116 containerd[1857]: time="2025-02-13T20:15:21.027775769Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.806843693s"
Feb 13 20:15:21.028116 containerd[1857]: time="2025-02-13T20:15:21.027813369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\""
Feb 13 20:15:21.030302 containerd[1857]: time="2025-02-13T20:15:21.030239537Z" level=info msg="CreateContainer within sandbox \"0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Feb 13 20:15:21.085632 containerd[1857]: time="2025-02-13T20:15:21.085531350Z" level=info msg="CreateContainer within sandbox \"0c61e0bd18d09f84f7d8a55f1945c3771cafa78e59172e31bb0cf65f9aae2a58\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c54149ee8945afb42b13e712cff70a5704cc4bfe035dd60f353312b94047001f\""
Feb 13 20:15:21.086328 containerd[1857]: time="2025-02-13T20:15:21.086285312Z" level=info msg="StartContainer for \"c54149ee8945afb42b13e712cff70a5704cc4bfe035dd60f353312b94047001f\""
Feb 13 20:15:21.152872 containerd[1857]: time="2025-02-13T20:15:21.152728800Z" level=info msg="StartContainer for \"c54149ee8945afb42b13e712cff70a5704cc4bfe035dd60f353312b94047001f\" returns successfully"
Feb 13 20:15:21.905548 kubelet[3630]: I0213 20:15:21.905104 3630 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Feb 13 20:15:21.905548 kubelet[3630]: I0213 20:15:21.905148 3630 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Feb 13 20:15:38.805906 systemd[1]: Started sshd@7-10.200.20.40:22-10.200.16.10:51886.service - OpenSSH per-connection server daemon (10.200.16.10:51886).
Feb 13 20:15:39.292970 sshd[6647]: Accepted publickey for core from 10.200.16.10 port 51886 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:15:39.295387 sshd-session[6647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:39.300725 systemd-logind[1798]: New session 10 of user core.
Feb 13 20:15:39.308020 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 20:15:39.710649 sshd[6652]: Connection closed by 10.200.16.10 port 51886
Feb 13 20:15:39.711245 sshd-session[6647]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:39.715312 systemd[1]: sshd@7-10.200.20.40:22-10.200.16.10:51886.service: Deactivated successfully.
Feb 13 20:15:39.717870 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 20:15:39.718902 systemd-logind[1798]: Session 10 logged out. Waiting for processes to exit.
Feb 13 20:15:39.720299 systemd-logind[1798]: Removed session 10.
Feb 13 20:15:44.788955 systemd[1]: Started sshd@8-10.200.20.40:22-10.200.16.10:55252.service - OpenSSH per-connection server daemon (10.200.16.10:55252).
Feb 13 20:15:45.233148 sshd[6670]: Accepted publickey for core from 10.200.16.10 port 55252 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:15:45.234567 sshd-session[6670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:45.238800 systemd-logind[1798]: New session 11 of user core.
Feb 13 20:15:45.243967 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 20:15:45.626221 sshd[6673]: Connection closed by 10.200.16.10 port 55252
Feb 13 20:15:45.626857 sshd-session[6670]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:45.631573 systemd[1]: sshd@8-10.200.20.40:22-10.200.16.10:55252.service: Deactivated successfully.
Feb 13 20:15:45.634569 systemd-logind[1798]: Session 11 logged out. Waiting for processes to exit.
Feb 13 20:15:45.635607 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 20:15:45.637046 systemd-logind[1798]: Removed session 11.
Feb 13 20:15:50.704923 systemd[1]: Started sshd@9-10.200.20.40:22-10.200.16.10:49996.service - OpenSSH per-connection server daemon (10.200.16.10:49996).
Feb 13 20:15:51.156191 sshd[6730]: Accepted publickey for core from 10.200.16.10 port 49996 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:15:51.158001 sshd-session[6730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:51.161972 systemd-logind[1798]: New session 12 of user core.
Feb 13 20:15:51.169036 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 20:15:51.558622 sshd[6733]: Connection closed by 10.200.16.10 port 49996
Feb 13 20:15:51.559178 sshd-session[6730]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:51.562925 systemd-logind[1798]: Session 12 logged out. Waiting for processes to exit.
Feb 13 20:15:51.563429 systemd[1]: sshd@9-10.200.20.40:22-10.200.16.10:49996.service: Deactivated successfully.
Feb 13 20:15:51.566249 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 20:15:51.568252 systemd-logind[1798]: Removed session 12.
Feb 13 20:15:51.644889 systemd[1]: Started sshd@10-10.200.20.40:22-10.200.16.10:50000.service - OpenSSH per-connection server daemon (10.200.16.10:50000).
Feb 13 20:15:52.127358 sshd[6745]: Accepted publickey for core from 10.200.16.10 port 50000 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:15:52.128943 sshd-session[6745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:52.133548 systemd-logind[1798]: New session 13 of user core.
Feb 13 20:15:52.138969 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 20:15:52.574802 sshd[6748]: Connection closed by 10.200.16.10 port 50000
Feb 13 20:15:52.575391 sshd-session[6745]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:52.580987 systemd-logind[1798]: Session 13 logged out. Waiting for processes to exit.
Feb 13 20:15:52.581151 systemd[1]: sshd@10-10.200.20.40:22-10.200.16.10:50000.service: Deactivated successfully.
Feb 13 20:15:52.587304 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 20:15:52.589375 systemd-logind[1798]: Removed session 13.
Feb 13 20:15:52.671926 systemd[1]: Started sshd@11-10.200.20.40:22-10.200.16.10:50004.service - OpenSSH per-connection server daemon (10.200.16.10:50004).
Feb 13 20:15:53.154224 sshd[6757]: Accepted publickey for core from 10.200.16.10 port 50004 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:15:53.155597 sshd-session[6757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:53.159754 systemd-logind[1798]: New session 14 of user core.
Feb 13 20:15:53.164061 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 20:15:53.564670 sshd[6760]: Connection closed by 10.200.16.10 port 50004
Feb 13 20:15:53.565167 sshd-session[6757]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:53.568053 systemd[1]: sshd@11-10.200.20.40:22-10.200.16.10:50004.service: Deactivated successfully.
Feb 13 20:15:53.571619 systemd-logind[1798]: Session 14 logged out. Waiting for processes to exit.
Feb 13 20:15:53.573508 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 20:15:53.574721 systemd-logind[1798]: Removed session 14.
Feb 13 20:15:58.641915 systemd[1]: Started sshd@12-10.200.20.40:22-10.200.16.10:50020.service - OpenSSH per-connection server daemon (10.200.16.10:50020).
Feb 13 20:15:59.089486 sshd[6775]: Accepted publickey for core from 10.200.16.10 port 50020 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:15:59.091003 sshd-session[6775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:59.095535 systemd-logind[1798]: New session 15 of user core.
Feb 13 20:15:59.099053 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 20:15:59.480005 sshd[6778]: Connection closed by 10.200.16.10 port 50020
Feb 13 20:15:59.480956 sshd-session[6775]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:59.484363 systemd[1]: sshd@12-10.200.20.40:22-10.200.16.10:50020.service: Deactivated successfully.
Feb 13 20:15:59.488402 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 20:15:59.489584 systemd-logind[1798]: Session 15 logged out. Waiting for processes to exit.
Feb 13 20:15:59.490867 systemd-logind[1798]: Removed session 15.
Feb 13 20:16:04.560906 systemd[1]: Started sshd@13-10.200.20.40:22-10.200.16.10:56128.service - OpenSSH per-connection server daemon (10.200.16.10:56128).
Feb 13 20:16:05.006594 sshd[6790]: Accepted publickey for core from 10.200.16.10 port 56128 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:16:05.008211 sshd-session[6790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:05.012358 systemd-logind[1798]: New session 16 of user core.
Feb 13 20:16:05.017013 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 20:16:05.423303 sshd[6793]: Connection closed by 10.200.16.10 port 56128
Feb 13 20:16:05.423999 sshd-session[6790]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:05.428538 systemd[1]: sshd@13-10.200.20.40:22-10.200.16.10:56128.service: Deactivated successfully.
Feb 13 20:16:05.429325 systemd-logind[1798]: Session 16 logged out. Waiting for processes to exit.
Feb 13 20:16:05.432036 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 20:16:05.433245 systemd-logind[1798]: Removed session 16.
Feb 13 20:16:05.498928 systemd[1]: Started sshd@14-10.200.20.40:22-10.200.16.10:56134.service - OpenSSH per-connection server daemon (10.200.16.10:56134).
Feb 13 20:16:05.944980 sshd[6804]: Accepted publickey for core from 10.200.16.10 port 56134 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:16:05.946388 sshd-session[6804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:05.950271 systemd-logind[1798]: New session 17 of user core.
Feb 13 20:16:05.957015 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 20:16:06.492150 sshd[6807]: Connection closed by 10.200.16.10 port 56134
Feb 13 20:16:06.492739 sshd-session[6804]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:06.497243 systemd[1]: sshd@14-10.200.20.40:22-10.200.16.10:56134.service: Deactivated successfully.
Feb 13 20:16:06.499784 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 20:16:06.499952 systemd-logind[1798]: Session 17 logged out. Waiting for processes to exit.
Feb 13 20:16:06.502408 systemd-logind[1798]: Removed session 17.
Feb 13 20:16:06.569902 systemd[1]: Started sshd@15-10.200.20.40:22-10.200.16.10:56140.service - OpenSSH per-connection server daemon (10.200.16.10:56140).
Feb 13 20:16:07.017041 sshd[6816]: Accepted publickey for core from 10.200.16.10 port 56140 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:16:07.018461 sshd-session[6816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:07.023604 systemd-logind[1798]: New session 18 of user core.
Feb 13 20:16:07.025987 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 20:16:09.255502 sshd[6819]: Connection closed by 10.200.16.10 port 56140
Feb 13 20:16:09.255570 sshd-session[6816]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:09.260280 systemd[1]: sshd@15-10.200.20.40:22-10.200.16.10:56140.service: Deactivated successfully.
Feb 13 20:16:09.260930 systemd-logind[1798]: Session 18 logged out. Waiting for processes to exit.
Feb 13 20:16:09.265047 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 20:16:09.266955 systemd-logind[1798]: Removed session 18.
Feb 13 20:16:09.340960 systemd[1]: Started sshd@16-10.200.20.40:22-10.200.16.10:55160.service - OpenSSH per-connection server daemon (10.200.16.10:55160).
Feb 13 20:16:09.830890 sshd[6856]: Accepted publickey for core from 10.200.16.10 port 55160 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:16:09.832778 sshd-session[6856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:09.837411 systemd-logind[1798]: New session 19 of user core.
Feb 13 20:16:09.840911 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 20:16:10.356634 sshd[6859]: Connection closed by 10.200.16.10 port 55160
Feb 13 20:16:10.358856 sshd-session[6856]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:10.362651 systemd[1]: sshd@16-10.200.20.40:22-10.200.16.10:55160.service: Deactivated successfully.
Feb 13 20:16:10.366680 systemd-logind[1798]: Session 19 logged out. Waiting for processes to exit.
Feb 13 20:16:10.367665 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 20:16:10.369286 systemd-logind[1798]: Removed session 19.
Feb 13 20:16:10.441078 systemd[1]: Started sshd@17-10.200.20.40:22-10.200.16.10:55164.service - OpenSSH per-connection server daemon (10.200.16.10:55164).
Feb 13 20:16:10.924591 sshd[6868]: Accepted publickey for core from 10.200.16.10 port 55164 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:16:10.925472 sshd-session[6868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:10.935267 systemd-logind[1798]: New session 20 of user core.
Feb 13 20:16:10.941288 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 20:16:11.369138 sshd[6871]: Connection closed by 10.200.16.10 port 55164
Feb 13 20:16:11.370925 sshd-session[6868]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:11.373974 systemd-logind[1798]: Session 20 logged out. Waiting for processes to exit.
Feb 13 20:16:11.374826 systemd[1]: sshd@17-10.200.20.40:22-10.200.16.10:55164.service: Deactivated successfully.
Feb 13 20:16:11.381331 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 20:16:11.383287 systemd-logind[1798]: Removed session 20.
Feb 13 20:16:16.452949 systemd[1]: Started sshd@18-10.200.20.40:22-10.200.16.10:55178.service - OpenSSH per-connection server daemon (10.200.16.10:55178).
Feb 13 20:16:16.937122 sshd[6886]: Accepted publickey for core from 10.200.16.10 port 55178 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:16:16.938069 sshd-session[6886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:16.942302 systemd-logind[1798]: New session 21 of user core.
Feb 13 20:16:16.949203 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 20:16:17.338680 sshd[6889]: Connection closed by 10.200.16.10 port 55178
Feb 13 20:16:17.339266 sshd-session[6886]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:17.343982 systemd[1]: sshd@18-10.200.20.40:22-10.200.16.10:55178.service: Deactivated successfully.
Feb 13 20:16:17.344329 systemd-logind[1798]: Session 21 logged out. Waiting for processes to exit.
Feb 13 20:16:17.348343 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 20:16:17.350786 systemd-logind[1798]: Removed session 21.
Feb 13 20:16:22.423365 systemd[1]: Started sshd@19-10.200.20.40:22-10.200.16.10:60372.service - OpenSSH per-connection server daemon (10.200.16.10:60372).
Feb 13 20:16:22.905817 sshd[6942]: Accepted publickey for core from 10.200.16.10 port 60372 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:16:22.907578 sshd-session[6942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:22.911819 systemd-logind[1798]: New session 22 of user core.
Feb 13 20:16:22.919138 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 20:16:23.314584 sshd[6945]: Connection closed by 10.200.16.10 port 60372
Feb 13 20:16:23.315245 sshd-session[6942]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:23.319862 systemd-logind[1798]: Session 22 logged out. Waiting for processes to exit.
Feb 13 20:16:23.321303 systemd[1]: sshd@19-10.200.20.40:22-10.200.16.10:60372.service: Deactivated successfully.
Feb 13 20:16:23.325124 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 20:16:23.327404 systemd-logind[1798]: Removed session 22.
Feb 13 20:16:28.401061 systemd[1]: Started sshd@20-10.200.20.40:22-10.200.16.10:60384.service - OpenSSH per-connection server daemon (10.200.16.10:60384).
Feb 13 20:16:28.893278 sshd[6962]: Accepted publickey for core from 10.200.16.10 port 60384 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:16:28.894850 sshd-session[6962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:28.899754 systemd-logind[1798]: New session 23 of user core.
Feb 13 20:16:28.907999 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 20:16:29.294795 sshd[6965]: Connection closed by 10.200.16.10 port 60384
Feb 13 20:16:29.295345 sshd-session[6962]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:29.298394 systemd[1]: sshd@20-10.200.20.40:22-10.200.16.10:60384.service: Deactivated successfully.
Feb 13 20:16:29.302291 systemd-logind[1798]: Session 23 logged out. Waiting for processes to exit.
Feb 13 20:16:29.302941 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 20:16:29.305129 systemd-logind[1798]: Removed session 23.
Feb 13 20:16:34.381258 systemd[1]: Started sshd@21-10.200.20.40:22-10.200.16.10:50664.service - OpenSSH per-connection server daemon (10.200.16.10:50664).
Feb 13 20:16:34.876226 sshd[6997]: Accepted publickey for core from 10.200.16.10 port 50664 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:16:34.879329 sshd-session[6997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:34.887959 systemd-logind[1798]: New session 24 of user core.
Feb 13 20:16:34.893086 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 20:16:35.285676 sshd[7000]: Connection closed by 10.200.16.10 port 50664
Feb 13 20:16:35.285481 sshd-session[6997]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:35.291867 systemd[1]: sshd@21-10.200.20.40:22-10.200.16.10:50664.service: Deactivated successfully.
Feb 13 20:16:35.299253 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 20:16:35.300763 systemd-logind[1798]: Session 24 logged out. Waiting for processes to exit.
Feb 13 20:16:35.304231 systemd-logind[1798]: Removed session 24.
Feb 13 20:16:40.366975 systemd[1]: Started sshd@22-10.200.20.40:22-10.200.16.10:41816.service - OpenSSH per-connection server daemon (10.200.16.10:41816).
Feb 13 20:16:40.811181 sshd[7010]: Accepted publickey for core from 10.200.16.10 port 41816 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:16:40.812701 sshd-session[7010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:40.817256 systemd-logind[1798]: New session 25 of user core.
Feb 13 20:16:40.820961 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 20:16:41.223810 sshd[7013]: Connection closed by 10.200.16.10 port 41816
Feb 13 20:16:41.224219 sshd-session[7010]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:41.227841 systemd[1]: sshd@22-10.200.20.40:22-10.200.16.10:41816.service: Deactivated successfully.
Feb 13 20:16:41.231384 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 20:16:41.232747 systemd-logind[1798]: Session 25 logged out. Waiting for processes to exit.
Feb 13 20:16:41.235258 systemd-logind[1798]: Removed session 25.
Feb 13 20:16:46.311896 systemd[1]: Started sshd@23-10.200.20.40:22-10.200.16.10:41824.service - OpenSSH per-connection server daemon (10.200.16.10:41824).
Feb 13 20:16:46.795522 sshd[7023]: Accepted publickey for core from 10.200.16.10 port 41824 ssh2: RSA SHA256:kLUjkQPZuV2HOHhCvQlkOhcZy8EI4D0W0rY+h58RxsI
Feb 13 20:16:46.796931 sshd-session[7023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:46.801174 systemd-logind[1798]: New session 26 of user core.
Feb 13 20:16:46.808149 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 20:16:47.198884 sshd[7026]: Connection closed by 10.200.16.10 port 41824
Feb 13 20:16:47.199128 sshd-session[7023]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:47.202969 systemd[1]: sshd@23-10.200.20.40:22-10.200.16.10:41824.service: Deactivated successfully.
Feb 13 20:16:47.207420 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 20:16:47.208153 systemd-logind[1798]: Session 26 logged out. Waiting for processes to exit.
Feb 13 20:16:47.209774 systemd-logind[1798]: Removed session 26.