Feb 13 18:58:34.335269 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 18:58:34.335292 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:29:42 -00 2025 Feb 13 18:58:34.335300 kernel: KASLR enabled Feb 13 18:58:34.335306 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Feb 13 18:58:34.335313 kernel: printk: bootconsole [pl11] enabled Feb 13 18:58:34.335318 kernel: efi: EFI v2.7 by EDK II Feb 13 18:58:34.335325 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Feb 13 18:58:34.335331 kernel: random: crng init done Feb 13 18:58:34.335337 kernel: secureboot: Secure boot disabled Feb 13 18:58:34.335343 kernel: ACPI: Early table checksum verification disabled Feb 13 18:58:34.335348 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Feb 13 18:58:34.335354 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 18:58:34.335360 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 18:58:34.335368 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Feb 13 18:58:34.335375 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 18:58:34.335381 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 18:58:34.335387 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 18:58:34.335394 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 18:58:34.335400 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 18:58:34.335406 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 18:58:34.335412 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Feb 13 18:58:34.335424 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 18:58:34.335431 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Feb 13 18:58:34.335437 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Feb 13 18:58:34.335443 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Feb 13 18:58:34.337491 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Feb 13 18:58:34.337515 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Feb 13 18:58:34.337522 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Feb 13 18:58:34.337533 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Feb 13 18:58:34.337539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Feb 13 18:58:34.337545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Feb 13 18:58:34.337551 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Feb 13 18:58:34.337558 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Feb 13 18:58:34.337564 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Feb 13 18:58:34.337570 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Feb 13 18:58:34.337576 kernel: NUMA: NODE_DATA [mem 0x1bf7f1800-0x1bf7f6fff] Feb 13 18:58:34.337582 kernel: Zone ranges: Feb 13 
18:58:34.337588 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Feb 13 18:58:34.337594 kernel: DMA32 empty Feb 13 18:58:34.337600 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Feb 13 18:58:34.337610 kernel: Movable zone start for each node Feb 13 18:58:34.337623 kernel: Early memory node ranges Feb 13 18:58:34.337630 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Feb 13 18:58:34.337637 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Feb 13 18:58:34.337643 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Feb 13 18:58:34.337651 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Feb 13 18:58:34.337658 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Feb 13 18:58:34.337664 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Feb 13 18:58:34.337670 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Feb 13 18:58:34.337677 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Feb 13 18:58:34.337683 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Feb 13 18:58:34.337690 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Feb 13 18:58:34.337697 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Feb 13 18:58:34.337703 kernel: psci: probing for conduit method from ACPI. Feb 13 18:58:34.337710 kernel: psci: PSCIv1.1 detected in firmware. Feb 13 18:58:34.337716 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 18:58:34.337722 kernel: psci: MIGRATE_INFO_TYPE not supported. Feb 13 18:58:34.337730 kernel: psci: SMC Calling Convention v1.4 Feb 13 18:58:34.337736 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Feb 13 18:58:34.337743 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Feb 13 18:58:34.337749 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 18:58:34.337756 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 18:58:34.337762 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 18:58:34.337769 kernel: Detected PIPT I-cache on CPU0 Feb 13 18:58:34.337775 kernel: CPU features: detected: GIC system register CPU interface Feb 13 18:58:34.337782 kernel: CPU features: detected: Hardware dirty bit management Feb 13 18:58:34.337788 kernel: CPU features: detected: Spectre-BHB Feb 13 18:58:34.337795 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 18:58:34.337803 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 18:58:34.337809 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 18:58:34.337816 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Feb 13 18:58:34.337822 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 18:58:34.337829 kernel: alternatives: applying boot alternatives Feb 13 18:58:34.337836 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b Feb 13 18:58:34.337843 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
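The command line logged above carries Flatcar's /usr verity parameters (mount.usr=/dev/mapper/usr, verity.usr=PARTUUID=..., verity.usrhash=...) alongside the Azure OEM id and console settings. As a minimal illustrative sketch only, assuming nothing beyond a standard /proc filesystem, the same parameters can be re-read on the running system like this:

# Sketch: parse the booted kernel command line into key/value pairs.
# Assumes a normal Linux /proc/cmdline; tokens without '=' map to None,
# and repeated keys (e.g. console=) keep the last value for simplicity.
def read_cmdline(path="/proc/cmdline"):
    with open(path) as f:
        tokens = f.read().split()
    params = {}
    for tok in tokens:
        key, _, value = tok.partition("=")
        params[key] = value if value else None
    return params

if __name__ == "__main__":
    params = read_cmdline()
    # e.g. the /usr verity hash and root label seen in the log above
    print(params.get("verity.usrhash"))
    print(params.get("root"))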
Feb 13 18:58:34.337850 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 18:58:34.337857 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 18:58:34.337863 kernel: Fallback order for Node 0: 0 Feb 13 18:58:34.337869 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Feb 13 18:58:34.337877 kernel: Policy zone: Normal Feb 13 18:58:34.337884 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 18:58:34.337890 kernel: software IO TLB: area num 2. Feb 13 18:58:34.337897 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB) Feb 13 18:58:34.337903 kernel: Memory: 3982064K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 212096K reserved, 0K cma-reserved) Feb 13 18:58:34.337910 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 18:58:34.337916 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 18:58:34.337923 kernel: rcu: RCU event tracing is enabled. Feb 13 18:58:34.337930 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 18:58:34.337937 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 18:58:34.337943 kernel: Tracing variant of Tasks RCU enabled. Feb 13 18:58:34.337951 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 18:58:34.337958 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 18:58:34.337964 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 18:58:34.337971 kernel: GICv3: 960 SPIs implemented Feb 13 18:58:34.337977 kernel: GICv3: 0 Extended SPIs implemented Feb 13 18:58:34.337983 kernel: Root IRQ handler: gic_handle_irq Feb 13 18:58:34.337990 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 18:58:34.337996 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Feb 13 18:58:34.338002 kernel: ITS: No ITS available, not enabling LPIs Feb 13 18:58:34.338009 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 18:58:34.338015 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 18:58:34.338022 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 18:58:34.338030 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 18:58:34.338037 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 18:58:34.338043 kernel: Console: colour dummy device 80x25 Feb 13 18:58:34.338054 kernel: printk: console [tty1] enabled Feb 13 18:58:34.338061 kernel: ACPI: Core revision 20230628 Feb 13 18:58:34.338068 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 18:58:34.338075 kernel: pid_max: default: 32768 minimum: 301 Feb 13 18:58:34.338081 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 18:58:34.338088 kernel: landlock: Up and running. Feb 13 18:58:34.338096 kernel: SELinux: Initializing. Feb 13 18:58:34.338103 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 18:58:34.338110 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 18:58:34.338116 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Feb 13 18:58:34.338123 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 18:58:34.338130 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Feb 13 18:58:34.338137 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Feb 13 18:58:34.338150 kernel: Hyper-V: enabling crash_kexec_post_notifiers Feb 13 18:58:34.338157 kernel: rcu: Hierarchical SRCU implementation. Feb 13 18:58:34.338164 kernel: rcu: Max phase no-delay instances is 400. Feb 13 18:58:34.338171 kernel: Remapping and enabling EFI services. Feb 13 18:58:34.338178 kernel: smp: Bringing up secondary CPUs ... Feb 13 18:58:34.338186 kernel: Detected PIPT I-cache on CPU1 Feb 13 18:58:34.338193 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Feb 13 18:58:34.338201 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 18:58:34.338207 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 18:58:34.338214 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 18:58:34.338223 kernel: SMP: Total of 2 processors activated. Feb 13 18:58:34.338230 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 18:58:34.338237 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Feb 13 18:58:34.338244 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 18:58:34.338251 kernel: CPU features: detected: CRC32 instructions Feb 13 18:58:34.338258 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 18:58:34.338265 kernel: CPU features: detected: LSE atomic instructions Feb 13 18:58:34.338272 kernel: CPU features: detected: Privileged Access Never Feb 13 18:58:34.338279 kernel: CPU: All CPU(s) started at EL1 Feb 13 18:58:34.338288 kernel: alternatives: applying system-wide alternatives Feb 13 18:58:34.338295 kernel: devtmpfs: initialized Feb 13 18:58:34.338302 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 18:58:34.338309 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 18:58:34.338316 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 18:58:34.338323 kernel: SMBIOS 3.1.0 present. Feb 13 18:58:34.338330 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Feb 13 18:58:34.338337 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 18:58:34.338344 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 18:58:34.338353 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 18:58:34.338360 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 18:58:34.338367 kernel: audit: initializing netlink subsys (disabled) Feb 13 18:58:34.338374 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Feb 13 18:58:34.338381 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 18:58:34.338388 kernel: cpuidle: using governor menu Feb 13 18:58:34.338395 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Feb 13 18:58:34.338402 kernel: ASID allocator initialised with 32768 entries Feb 13 18:58:34.338409 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 18:58:34.338417 kernel: Serial: AMBA PL011 UART driver Feb 13 18:58:34.338424 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 18:58:34.338435 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 18:58:34.338443 kernel: Modules: 508880 pages in range for PLT usage Feb 13 18:58:34.338460 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 18:58:34.338468 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 18:58:34.338475 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 18:58:34.338481 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 18:58:34.338488 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 18:58:34.338498 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 18:58:34.338505 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 18:58:34.338512 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 18:58:34.338519 kernel: ACPI: Added _OSI(Module Device) Feb 13 18:58:34.338526 kernel: ACPI: Added _OSI(Processor Device) Feb 13 18:58:34.338533 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 18:58:34.338540 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 18:58:34.338547 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 18:58:34.338554 kernel: ACPI: Interpreter enabled Feb 13 18:58:34.338562 kernel: ACPI: Using GIC for interrupt routing Feb 13 18:58:34.338569 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Feb 13 18:58:34.338576 kernel: printk: console [ttyAMA0] enabled Feb 13 18:58:34.338583 kernel: printk: bootconsole [pl11] disabled Feb 13 18:58:34.338590 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Feb 13 18:58:34.338597 kernel: iommu: Default domain type: Translated Feb 13 18:58:34.338604 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 18:58:34.338611 kernel: efivars: Registered efivars operations Feb 13 18:58:34.338618 kernel: vgaarb: loaded Feb 13 18:58:34.338626 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 18:58:34.338633 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 18:58:34.338640 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 18:58:34.338647 kernel: pnp: PnP ACPI init Feb 13 18:58:34.338654 kernel: pnp: PnP ACPI: found 0 devices Feb 13 18:58:34.338661 kernel: NET: Registered PF_INET protocol family Feb 13 18:58:34.338668 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 18:58:34.338675 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 18:58:34.338683 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 18:58:34.338696 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 18:58:34.338704 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 18:58:34.338711 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 18:58:34.338718 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 18:58:34.338725 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 18:58:34.338732 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 18:58:34.338739 kernel: PCI: CLS 0 bytes, default 64 Feb 13 18:58:34.338746 kernel: kvm [1]: HYP mode not available Feb 13 18:58:34.338753 kernel: Initialise system trusted keyrings Feb 13 18:58:34.338762 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 18:58:34.338769 kernel: Key type asymmetric registered Feb 13 18:58:34.338776 kernel: Asymmetric key parser 'x509' registered Feb 13 18:58:34.338783 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 18:58:34.338790 kernel: io scheduler mq-deadline registered Feb 13 18:58:34.338797 kernel: io scheduler kyber registered Feb 13 18:58:34.338804 kernel: io scheduler bfq registered Feb 13 18:58:34.338811 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 18:58:34.338818 kernel: thunder_xcv, ver 1.0 Feb 13 18:58:34.338826 kernel: thunder_bgx, ver 1.0 Feb 13 18:58:34.338833 kernel: nicpf, ver 1.0 Feb 13 18:58:34.338840 kernel: nicvf, ver 1.0 Feb 13 18:58:34.339000 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 18:58:34.339073 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T18:58:33 UTC (1739473113) Feb 13 18:58:34.339083 kernel: efifb: probing for efifb Feb 13 18:58:34.339090 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 13 18:58:34.339097 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 13 18:58:34.339107 kernel: efifb: scrolling: redraw Feb 13 18:58:34.339113 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 18:58:34.339120 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 18:58:34.339127 kernel: fb0: EFI VGA frame buffer device Feb 13 18:58:34.339134 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Feb 13 18:58:34.339141 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 18:58:34.339148 kernel: No ACPI PMU IRQ for CPU0 Feb 13 18:58:34.339155 kernel: No ACPI PMU IRQ for CPU1 Feb 13 18:58:34.339162 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Feb 13 18:58:34.339170 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 18:58:34.339177 kernel: watchdog: Hard watchdog permanently disabled Feb 13 18:58:34.339185 kernel: NET: Registered PF_INET6 protocol family Feb 13 18:58:34.339192 kernel: Segment Routing with IPv6 Feb 13 18:58:34.339199 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 18:58:34.339206 kernel: NET: Registered PF_PACKET protocol family Feb 13 18:58:34.339212 kernel: Key type dns_resolver registered Feb 13 18:58:34.339219 kernel: registered taskstats version 1 Feb 13 18:58:34.339226 kernel: Loading compiled-in X.509 certificates Feb 13 18:58:34.339234 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 987d382bd4f498c8030ef29b348ef5d6fcf1f0e3' Feb 13 18:58:34.339242 kernel: Key type .fscrypt registered Feb 13 18:58:34.339248 kernel: Key type fscrypt-provisioning registered Feb 13 18:58:34.339255 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 18:58:34.339262 kernel: ima: Allocated hash algorithm: sha1 Feb 13 18:58:34.339269 kernel: ima: No architecture policies found Feb 13 18:58:34.339276 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 18:58:34.339283 kernel: clk: Disabling unused clocks Feb 13 18:58:34.339290 kernel: Freeing unused kernel memory: 39936K Feb 13 18:58:34.339299 kernel: Run /init as init process Feb 13 18:58:34.339306 kernel: with arguments: Feb 13 18:58:34.339313 kernel: /init Feb 13 18:58:34.339320 kernel: with environment: Feb 13 18:58:34.339326 kernel: HOME=/ Feb 13 18:58:34.339333 kernel: TERM=linux Feb 13 18:58:34.339340 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 18:58:34.339349 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 18:58:34.339361 systemd[1]: Detected virtualization microsoft. Feb 13 18:58:34.339368 systemd[1]: Detected architecture arm64. Feb 13 18:58:34.339376 systemd[1]: Running in initrd. Feb 13 18:58:34.339383 systemd[1]: No hostname configured, using default hostname. Feb 13 18:58:34.339390 systemd[1]: Hostname set to . Feb 13 18:58:34.339405 systemd[1]: Initializing machine ID from random generator. Feb 13 18:58:34.339413 systemd[1]: Queued start job for default target initrd.target. Feb 13 18:58:34.339420 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 18:58:34.339430 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 18:58:34.339438 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 18:58:34.339446 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 18:58:34.341515 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 18:58:34.341526 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 18:58:34.341536 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 18:58:34.341551 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 18:58:34.341558 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 18:58:34.341566 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 18:58:34.341574 systemd[1]: Reached target paths.target - Path Units. Feb 13 18:58:34.341581 systemd[1]: Reached target slices.target - Slice Units. Feb 13 18:58:34.341588 systemd[1]: Reached target swap.target - Swaps. Feb 13 18:58:34.341596 systemd[1]: Reached target timers.target - Timer Units. Feb 13 18:58:34.341603 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 18:58:34.341611 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 18:58:34.341621 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 18:58:34.341632 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Feb 13 18:58:34.341640 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 18:58:34.341647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 18:58:34.341655 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 18:58:34.341663 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 18:58:34.341670 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 18:58:34.341678 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 18:58:34.341687 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 18:58:34.341695 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 18:58:34.341702 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 18:58:34.341736 systemd-journald[218]: Collecting audit messages is disabled. Feb 13 18:58:34.341759 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 18:58:34.341767 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 18:58:34.341775 systemd-journald[218]: Journal started Feb 13 18:58:34.341797 systemd-journald[218]: Runtime Journal (/run/log/journal/8ca503a3528e4446a98f983c0e05025a) is 8.0M, max 78.5M, 70.5M free. Feb 13 18:58:34.347951 systemd-modules-load[219]: Inserted module 'overlay' Feb 13 18:58:34.382052 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 18:58:34.382112 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 18:58:34.382127 kernel: Bridge firewalling registered Feb 13 18:58:34.391582 systemd-modules-load[219]: Inserted module 'br_netfilter' Feb 13 18:58:34.397253 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 18:58:34.410201 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 18:58:34.417695 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 18:58:34.428836 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 18:58:34.446590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:58:34.464614 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 18:58:34.473627 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 18:58:34.493906 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 18:58:34.523815 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 18:58:34.538626 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 18:58:34.555484 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 18:58:34.562185 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 18:58:34.574583 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 18:58:34.598973 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 18:58:34.614155 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 18:58:34.630704 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Feb 13 18:58:34.646725 dracut-cmdline[251]: dracut-dracut-053 Feb 13 18:58:34.651298 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 18:58:34.668239 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b Feb 13 18:58:34.674688 systemd-resolved[255]: Positive Trust Anchors: Feb 13 18:58:34.674701 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 18:58:34.674731 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 18:58:34.679777 systemd-resolved[255]: Defaulting to hostname 'linux'. Feb 13 18:58:34.700606 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 18:58:34.707908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 18:58:34.792467 kernel: SCSI subsystem initialized Feb 13 18:58:34.800468 kernel: Loading iSCSI transport class v2.0-870. Feb 13 18:58:34.811496 kernel: iscsi: registered transport (tcp) Feb 13 18:58:34.829131 kernel: iscsi: registered transport (qla4xxx) Feb 13 18:58:34.829173 kernel: QLogic iSCSI HBA Driver Feb 13 18:58:34.868889 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 18:58:34.883714 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 18:58:34.914466 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 18:58:34.914518 kernel: device-mapper: uevent: version 1.0.3 Feb 13 18:58:34.914529 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 18:58:34.968477 kernel: raid6: neonx8 gen() 15771 MB/s Feb 13 18:58:34.989461 kernel: raid6: neonx4 gen() 15817 MB/s Feb 13 18:58:35.009460 kernel: raid6: neonx2 gen() 13211 MB/s Feb 13 18:58:35.030461 kernel: raid6: neonx1 gen() 10527 MB/s Feb 13 18:58:35.050459 kernel: raid6: int64x8 gen() 6793 MB/s Feb 13 18:58:35.070459 kernel: raid6: int64x4 gen() 7353 MB/s Feb 13 18:58:35.091462 kernel: raid6: int64x2 gen() 6109 MB/s Feb 13 18:58:35.115305 kernel: raid6: int64x1 gen() 5059 MB/s Feb 13 18:58:35.115316 kernel: raid6: using algorithm neonx4 gen() 15817 MB/s Feb 13 18:58:35.140108 kernel: raid6: .... 
xor() 12313 MB/s, rmw enabled Feb 13 18:58:35.140125 kernel: raid6: using neon recovery algorithm Feb 13 18:58:35.152725 kernel: xor: measuring software checksum speed Feb 13 18:58:35.152746 kernel: 8regs : 21590 MB/sec Feb 13 18:58:35.156452 kernel: 32regs : 21624 MB/sec Feb 13 18:58:35.160098 kernel: arm64_neon : 27851 MB/sec Feb 13 18:58:35.164502 kernel: xor: using function: arm64_neon (27851 MB/sec) Feb 13 18:58:35.214467 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 18:58:35.224212 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 18:58:35.241594 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 18:58:35.266105 systemd-udevd[438]: Using default interface naming scheme 'v255'. Feb 13 18:58:35.272786 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 18:58:35.298616 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 18:58:35.313541 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Feb 13 18:58:35.340189 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 18:58:35.355781 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 18:58:35.394929 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 18:58:35.416783 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 18:58:35.442400 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 18:58:35.452270 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 18:58:35.471565 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 18:58:35.493272 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 18:58:35.528642 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 18:58:35.550516 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 18:58:35.576357 kernel: hv_vmbus: Vmbus version:5.3 Feb 13 18:58:35.576385 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 18:58:35.578786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 18:58:35.610440 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 13 18:58:35.610486 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 18:58:35.610496 kernel: hv_vmbus: registering driver hid_hyperv Feb 13 18:58:35.610505 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 13 18:58:35.578909 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 18:58:35.638765 kernel: PTP clock support registered Feb 13 18:58:35.638785 kernel: hv_vmbus: registering driver hv_netvsc Feb 13 18:58:35.637294 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 18:58:35.669344 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 18:58:35.669372 kernel: hv_vmbus: registering driver hv_utils Feb 13 18:58:35.651225 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 13 18:58:35.696472 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 18:58:35.696497 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 18:58:35.696515 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 13 18:58:35.696525 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 18:58:35.696534 kernel: scsi host0: storvsc_host_t Feb 13 18:58:35.651392 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:58:36.176246 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 13 18:58:36.176428 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 18:58:36.176597 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 18:58:36.176752 kernel: scsi host1: storvsc_host_t Feb 13 18:58:36.177061 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 18:58:35.689824 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 18:58:36.153773 systemd-resolved[255]: Clock change detected. Flushing caches. Feb 13 18:58:36.185769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 18:58:36.215180 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 18:58:36.267923 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 18:58:36.268120 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 18:58:36.268132 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 18:58:36.268216 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 18:58:36.296770 kernel: hv_netvsc 0022487b-bff3-0022-487b-bff30022487b eth0: VF slot 1 added Feb 13 18:58:36.296899 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 18:58:36.296989 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 18:58:36.297069 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 18:58:36.297147 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 18:58:36.297225 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 18:58:36.297235 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 18:58:36.215270 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:58:36.346265 kernel: hv_vmbus: registering driver hv_pci Feb 13 18:58:36.346300 kernel: hv_pci 702a1354-6378-4f54-a12a-5e48ebe1bd88: PCI VMBus probing: Using version 0x10004 Feb 13 18:58:36.415129 kernel: hv_pci 702a1354-6378-4f54-a12a-5e48ebe1bd88: PCI host bridge to bus 6378:00 Feb 13 18:58:36.415243 kernel: pci_bus 6378:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 13 18:58:36.415335 kernel: pci_bus 6378:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 18:58:36.415416 kernel: pci 6378:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 13 18:58:36.415530 kernel: pci 6378:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 13 18:58:36.415614 kernel: pci 6378:00:02.0: enabling Extended Tags Feb 13 18:58:36.415692 kernel: pci 6378:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6378:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Feb 13 18:58:36.415769 kernel: pci_bus 6378:00: busn_res: [bus 00-ff] end is updated to 00 Feb 13 18:58:36.415841 kernel: pci 6378:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 13 18:58:36.245443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 18:58:36.332647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:58:36.378665 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 18:58:36.450026 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 18:58:36.482505 kernel: mlx5_core 6378:00:02.0: enabling device (0000 -> 0002) Feb 13 18:58:36.701755 kernel: mlx5_core 6378:00:02.0: firmware version: 16.30.1284 Feb 13 18:58:36.701879 kernel: hv_netvsc 0022487b-bff3-0022-487b-bff30022487b eth0: VF registering: eth1 Feb 13 18:58:36.701981 kernel: mlx5_core 6378:00:02.0 eth1: joined to eth0 Feb 13 18:58:36.702077 kernel: mlx5_core 6378:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Feb 13 18:58:36.710458 kernel: mlx5_core 6378:00:02.0 enP25464s1: renamed from eth1 Feb 13 18:58:36.825579 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 18:58:36.969465 kernel: BTRFS: device fsid 55beb02a-1d0d-4a3e-812c-2737f0301ec8 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (495) Feb 13 18:58:36.977456 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (492) Feb 13 18:58:36.988606 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Feb 13 18:58:36.995613 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 18:58:37.017486 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 18:58:37.040727 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 18:58:37.102279 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Feb 13 18:58:38.071485 disk-uuid[605]: The operation has completed successfully. Feb 13 18:58:38.077367 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 18:58:38.138501 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 18:58:38.138602 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 18:58:38.166612 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 18:58:38.181177 sh[694]: Success Feb 13 18:58:38.211486 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 18:58:38.443362 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 18:58:38.464579 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 18:58:38.473814 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 18:58:38.509660 kernel: BTRFS info (device dm-0): first mount of filesystem 55beb02a-1d0d-4a3e-812c-2737f0301ec8 Feb 13 18:58:38.509723 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 18:58:38.517333 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 18:58:38.523152 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 18:58:38.528173 kernel: BTRFS info (device dm-0): using free space tree Feb 13 18:58:38.858163 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 18:58:38.863963 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 18:58:38.883745 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Feb 13 18:58:38.891623 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 18:58:38.930119 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:58:38.930178 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 18:58:38.934769 kernel: BTRFS info (device sda6): using free space tree Feb 13 18:58:38.983371 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 18:58:38.999921 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 18:58:39.003694 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 18:58:39.013943 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 18:58:39.037453 kernel: BTRFS info (device sda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:58:39.043145 systemd-networkd[871]: lo: Link UP Feb 13 18:58:39.043739 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 18:58:39.046754 systemd-networkd[871]: lo: Gained carrier Feb 13 18:58:39.048862 systemd-networkd[871]: Enumeration completed Feb 13 18:58:39.059890 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 18:58:39.059894 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 18:58:39.060760 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 18:58:39.072127 systemd[1]: Reached target network.target - Network. Feb 13 18:58:39.107664 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 18:58:39.135461 kernel: mlx5_core 6378:00:02.0 enP25464s1: Link up Feb 13 18:58:39.180454 kernel: hv_netvsc 0022487b-bff3-0022-487b-bff30022487b eth0: Data path switched to VF: enP25464s1 Feb 13 18:58:39.181477 systemd-networkd[871]: enP25464s1: Link UP Feb 13 18:58:39.181701 systemd-networkd[871]: eth0: Link UP Feb 13 18:58:39.182100 systemd-networkd[871]: eth0: Gained carrier Feb 13 18:58:39.182111 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 18:58:39.193601 systemd-networkd[871]: enP25464s1: Gained carrier Feb 13 18:58:39.216477 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 18:58:40.265989 ignition[879]: Ignition 2.20.0 Feb 13 18:58:40.266001 ignition[879]: Stage: fetch-offline Feb 13 18:58:40.271101 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 18:58:40.266035 ignition[879]: no configs at "/usr/lib/ignition/base.d" Feb 13 18:58:40.266044 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 18:58:40.266129 ignition[879]: parsed url from cmdline: "" Feb 13 18:58:40.294579 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 18:58:40.266133 ignition[879]: no config URL provided Feb 13 18:58:40.266137 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 18:58:40.266144 ignition[879]: no config at "/usr/lib/ignition/user.ign" Feb 13 18:58:40.266149 ignition[879]: failed to fetch config: resource requires networking Feb 13 18:58:40.266654 ignition[879]: Ignition finished successfully Feb 13 18:58:40.332699 ignition[887]: Ignition 2.20.0 Feb 13 18:58:40.332706 ignition[887]: Stage: fetch Feb 13 18:58:40.332905 ignition[887]: no configs at "/usr/lib/ignition/base.d" Feb 13 18:58:40.332914 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 18:58:40.333028 ignition[887]: parsed url from cmdline: "" Feb 13 18:58:40.333035 ignition[887]: no config URL provided Feb 13 18:58:40.333040 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 18:58:40.333046 ignition[887]: no config at "/usr/lib/ignition/user.ign" Feb 13 18:58:40.333072 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 18:58:40.459625 ignition[887]: GET result: OK Feb 13 18:58:40.459694 ignition[887]: config has been read from IMDS userdata Feb 13 18:58:40.459727 ignition[887]: parsing config with SHA512: 064372da013ba3872ca641a9e80a068f0be60c88a9547665b140cc4762963ddb8124ee65617bc1796686662e4cded6f82422251c6d6d11e23a923456a28b3aba Feb 13 18:58:40.463543 unknown[887]: fetched base config from "system" Feb 13 18:58:40.463842 ignition[887]: fetch: fetch complete Feb 13 18:58:40.463553 unknown[887]: fetched base config from "system" Feb 13 18:58:40.463847 ignition[887]: fetch: fetch passed Feb 13 18:58:40.463558 unknown[887]: fetched user config from "azure" Feb 13 18:58:40.463892 ignition[887]: Ignition finished successfully Feb 13 18:58:40.469032 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 18:58:40.493634 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 18:58:40.514941 ignition[893]: Ignition 2.20.0 Feb 13 18:58:40.514948 ignition[893]: Stage: kargs Feb 13 18:58:40.517763 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 18:58:40.515122 ignition[893]: no configs at "/usr/lib/ignition/base.d" Feb 13 18:58:40.515131 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 18:58:40.516030 ignition[893]: kargs: kargs passed Feb 13 18:58:40.516093 ignition[893]: Ignition finished successfully Feb 13 18:58:40.546857 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 18:58:40.568610 ignition[900]: Ignition 2.20.0 Feb 13 18:58:40.568621 ignition[900]: Stage: disks Feb 13 18:58:40.574468 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 18:58:40.568933 ignition[900]: no configs at "/usr/lib/ignition/base.d" Feb 13 18:58:40.582380 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 18:58:40.568947 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 18:58:40.594403 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 18:58:40.569871 ignition[900]: disks: disks passed Feb 13 18:58:40.606578 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 18:58:40.569923 ignition[900]: Ignition finished successfully Feb 13 18:58:40.618456 systemd[1]: Reached target sysinit.target - System Initialization. 
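The config source Ignition reports above is the Azure Instance Metadata Service (IMDS) userData endpoint. As an illustrative sketch only, assuming IMDS's usual contract (requests must carry a "Metadata: true" header and userData is returned base64-encoded), the same fetch could be reproduced roughly like this:

# Sketch: fetch instance userData from Azure IMDS, mirroring the GET in the
# Ignition fetch stage logged above. Assumptions: the "Metadata: true" header
# is required and the response body is base64-encoded text.
import base64
import urllib.request

IMDS_USERDATA = ("http://169.254.169.254/metadata/instance/compute/userData"
                 "?api-version=2021-01-01&format=text")

def fetch_userdata(url=IMDS_USERDATA, timeout=5):
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        encoded = resp.read()
    return base64.b64decode(encoded)  # Ignition parses this payload as its config

if __name__ == "__main__":
    print(fetch_userdata()[:200])

The hostname fetch that flatcar-metadata-hostname performs later in this log (http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text) follows the same request pattern.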
Feb 13 18:58:40.633167 systemd[1]: Reached target basic.target - Basic System. Feb 13 18:58:40.640111 systemd-networkd[871]: enP25464s1: Gained IPv6LL Feb 13 18:58:40.667737 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 18:58:40.760051 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 18:58:40.769793 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 18:58:40.787666 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 18:58:40.848457 kernel: EXT4-fs (sda9): mounted filesystem 005a6458-8fd3-46f1-ab43-85ef18df7ccd r/w with ordered data mode. Quota mode: none. Feb 13 18:58:40.848529 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 18:58:40.853941 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 18:58:40.901526 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 18:58:40.912552 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 18:58:40.925959 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 18:58:40.933620 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 18:58:40.969165 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (919) Feb 13 18:58:40.969189 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:58:40.933662 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 18:58:40.989502 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 18:58:40.977490 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 18:58:41.006224 kernel: BTRFS info (device sda6): using free space tree Feb 13 18:58:41.014488 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 18:58:41.014785 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 18:58:41.023252 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 18:58:41.149601 systemd-networkd[871]: eth0: Gained IPv6LL Feb 13 18:58:41.573110 coreos-metadata[921]: Feb 13 18:58:41.573 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 18:58:41.590017 coreos-metadata[921]: Feb 13 18:58:41.589 INFO Fetch successful Feb 13 18:58:41.595585 coreos-metadata[921]: Feb 13 18:58:41.595 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 18:58:41.607690 coreos-metadata[921]: Feb 13 18:58:41.607 INFO Fetch successful Feb 13 18:58:41.618997 coreos-metadata[921]: Feb 13 18:58:41.618 INFO wrote hostname ci-4186.1.1-a-c0811b896b to /sysroot/etc/hostname Feb 13 18:58:41.629346 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 18:58:41.675428 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 18:58:41.715345 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Feb 13 18:58:41.740849 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 18:58:41.766116 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 18:58:42.840336 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Feb 13 18:58:42.856640 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 18:58:42.865627 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 18:58:42.889467 kernel: BTRFS info (device sda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:58:42.889083 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 18:58:42.918936 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 18:58:42.930394 ignition[1039]: INFO : Ignition 2.20.0 Feb 13 18:58:42.930394 ignition[1039]: INFO : Stage: mount Feb 13 18:58:42.939514 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 18:58:42.939514 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 18:58:42.939514 ignition[1039]: INFO : mount: mount passed Feb 13 18:58:42.939514 ignition[1039]: INFO : Ignition finished successfully Feb 13 18:58:42.935745 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 18:58:42.967560 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 18:58:42.982752 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 18:58:43.013481 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1051) Feb 13 18:58:43.027856 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:58:43.027910 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 18:58:43.032451 kernel: BTRFS info (device sda6): using free space tree Feb 13 18:58:43.040471 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 18:58:43.041408 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 18:58:43.067679 ignition[1068]: INFO : Ignition 2.20.0 Feb 13 18:58:43.073325 ignition[1068]: INFO : Stage: files Feb 13 18:58:43.073325 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 18:58:43.073325 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 18:58:43.073325 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Feb 13 18:58:43.097216 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 18:58:43.097216 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 18:58:43.219619 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Feb 13 18:58:43.220004 unknown[1068]: wrote ssh authorized keys file for user: core Feb 13 18:58:43.670482 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 18:58:43.905691 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 18:58:43.919148 ignition[1068]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 18:58:43.919148 ignition[1068]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 18:58:43.919148 ignition[1068]: INFO : files: files passed Feb 13 18:58:43.919148 ignition[1068]: INFO : Ignition finished successfully Feb 13 18:58:43.920584 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 18:58:43.953689 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 18:58:43.962615 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 18:58:43.986517 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 18:58:43.986628 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 18:58:44.015893 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 18:58:44.015893 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 18:58:44.001280 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 18:58:44.047206 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 18:58:44.015206 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 18:58:44.047757 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 18:58:44.085799 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 18:58:44.087470 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 18:58:44.098515 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 18:58:44.111105 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 18:58:44.122798 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 18:58:44.136701 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 18:58:44.158832 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Feb 13 18:58:44.177749 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 18:58:44.195684 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 18:58:44.197463 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 18:58:44.217249 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 18:58:44.224074 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 18:58:44.237011 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 18:58:44.248848 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 18:58:44.248919 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 18:58:44.266120 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 18:58:44.278635 systemd[1]: Stopped target basic.target - Basic System. Feb 13 18:58:44.289470 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 18:58:44.300612 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 18:58:44.315072 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 18:58:44.327267 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 18:58:44.339260 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 18:58:44.352024 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 18:58:44.365745 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 18:58:44.376850 systemd[1]: Stopped target swap.target - Swaps. Feb 13 18:58:44.387105 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 18:58:44.387183 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 18:58:44.404540 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 18:58:44.417680 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 18:58:44.431288 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 18:58:44.437681 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 18:58:44.445168 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 18:58:44.445241 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 18:58:44.465074 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 18:58:44.465129 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 18:58:44.477653 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 18:58:44.477701 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 18:58:44.489180 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 18:58:44.489233 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 18:58:44.521577 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Feb 13 18:58:44.557972 ignition[1121]: INFO : Ignition 2.20.0 Feb 13 18:58:44.557972 ignition[1121]: INFO : Stage: umount Feb 13 18:58:44.557972 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 18:58:44.557972 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 18:58:44.557972 ignition[1121]: INFO : umount: umount passed Feb 13 18:58:44.557972 ignition[1121]: INFO : Ignition finished successfully Feb 13 18:58:44.527186 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 18:58:44.527252 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 18:58:44.558561 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 18:58:44.571908 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 18:58:44.571986 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 18:58:44.584677 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 18:58:44.584738 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 18:58:44.598623 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 18:58:44.598724 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 18:58:44.608629 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 18:58:44.608706 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 18:58:44.614491 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 18:58:44.614540 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 18:58:44.625009 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 18:58:44.625055 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 18:58:44.637269 systemd[1]: Stopped target network.target - Network. Feb 13 18:58:44.649504 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 18:58:44.649569 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 18:58:44.661533 systemd[1]: Stopped target paths.target - Path Units. Feb 13 18:58:44.673166 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 18:58:44.685188 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 18:58:44.692623 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 18:58:44.697804 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 18:58:44.710516 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 18:58:44.710574 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 18:58:44.721521 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 18:58:44.721566 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 18:58:44.733186 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 18:58:44.733249 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 18:58:44.739328 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 18:58:44.739377 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 18:58:44.750397 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 18:58:44.756840 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Feb 13 18:58:44.768490 systemd-networkd[871]: eth0: DHCPv6 lease lost Feb 13 18:58:45.006670 kernel: hv_netvsc 0022487b-bff3-0022-487b-bff30022487b eth0: Data path switched from VF: enP25464s1 Feb 13 18:58:44.769811 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 18:58:44.773818 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 18:58:44.773943 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 18:58:44.782595 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 18:58:44.782686 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 18:58:44.795395 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 18:58:44.795482 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 18:58:44.828675 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 18:58:44.838426 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 18:58:44.838518 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 18:58:44.853817 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 18:58:44.853881 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 18:58:44.868235 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 18:58:44.868302 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 18:58:44.879472 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 18:58:44.879528 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 18:58:44.891662 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 18:58:44.907123 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 18:58:44.907227 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 18:58:44.934541 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 18:58:44.934658 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 18:58:44.946246 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 18:58:44.946384 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 18:58:44.959102 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 18:58:44.959174 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 18:58:44.970341 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 18:58:44.970386 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 18:58:44.982353 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 18:58:44.982406 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 18:58:45.006504 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 18:58:45.006567 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 18:58:45.017349 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 18:58:45.017414 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 18:58:45.070640 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 18:58:45.087700 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Feb 13 18:58:45.087778 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 18:58:45.294260 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Feb 13 18:58:45.100625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 18:58:45.100678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:58:45.114114 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 18:58:45.114220 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 18:58:45.125247 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 18:58:45.126459 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 18:58:45.138210 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 18:58:45.168688 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 18:58:45.198197 systemd[1]: Switching root. Feb 13 18:58:45.345010 systemd-journald[218]: Journal stopped
Feb 13 18:58:34.337730 kernel: psci: SMC Calling Convention v1.4 Feb 13 18:58:34.337736 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Feb 13 18:58:34.337743 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Feb 13 18:58:34.337749 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 18:58:34.337756 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 18:58:34.337762 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 18:58:34.337769 kernel: Detected PIPT I-cache on CPU0 Feb 13 18:58:34.337775 kernel: CPU features: detected: GIC system register CPU interface Feb 13 18:58:34.337782 kernel: CPU features: detected: Hardware dirty bit management Feb 13 18:58:34.337788 kernel: CPU features: detected: Spectre-BHB Feb 13 18:58:34.337795 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 18:58:34.337803 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 18:58:34.337809 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 18:58:34.337816 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Feb 13 18:58:34.337822 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 18:58:34.337829 kernel: alternatives: applying boot alternatives Feb 13 18:58:34.337836 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b Feb 13 18:58:34.337843 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 18:58:34.337850 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 18:58:34.337857 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 18:58:34.337863 kernel: Fallback order for Node 0: 0 Feb 13 18:58:34.337869 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Feb 13 18:58:34.337877 kernel: Policy zone: Normal Feb 13 18:58:34.337884 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 18:58:34.337890 kernel: software IO TLB: area num 2. Feb 13 18:58:34.337897 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB) Feb 13 18:58:34.337903 kernel: Memory: 3982064K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 212096K reserved, 0K cma-reserved) Feb 13 18:58:34.337910 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 18:58:34.337916 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 18:58:34.337923 kernel: rcu: RCU event tracing is enabled. Feb 13 18:58:34.337930 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 18:58:34.337937 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 18:58:34.337943 kernel: Tracing variant of Tasks RCU enabled. Feb 13 18:58:34.337951 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 18:58:34.337958 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 18:58:34.337964 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 18:58:34.337971 kernel: GICv3: 960 SPIs implemented Feb 13 18:58:34.337977 kernel: GICv3: 0 Extended SPIs implemented Feb 13 18:58:34.337983 kernel: Root IRQ handler: gic_handle_irq Feb 13 18:58:34.337990 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 18:58:34.337996 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Feb 13 18:58:34.338002 kernel: ITS: No ITS available, not enabling LPIs Feb 13 18:58:34.338009 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 18:58:34.338015 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 18:58:34.338022 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 18:58:34.338030 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 18:58:34.338037 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 18:58:34.338043 kernel: Console: colour dummy device 80x25 Feb 13 18:58:34.338054 kernel: printk: console [tty1] enabled Feb 13 18:58:34.338061 kernel: ACPI: Core revision 20230628 Feb 13 18:58:34.338068 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 18:58:34.338075 kernel: pid_max: default: 32768 minimum: 301 Feb 13 18:58:34.338081 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 18:58:34.338088 kernel: landlock: Up and running. Feb 13 18:58:34.338096 kernel: SELinux: Initializing. Feb 13 18:58:34.338103 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 18:58:34.338110 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 18:58:34.338116 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 18:58:34.338123 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 18:58:34.338130 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Feb 13 18:58:34.338137 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Feb 13 18:58:34.338150 kernel: Hyper-V: enabling crash_kexec_post_notifiers Feb 13 18:58:34.338157 kernel: rcu: Hierarchical SRCU implementation. Feb 13 18:58:34.338164 kernel: rcu: Max phase no-delay instances is 400. Feb 13 18:58:34.338171 kernel: Remapping and enabling EFI services. Feb 13 18:58:34.338178 kernel: smp: Bringing up secondary CPUs ... Feb 13 18:58:34.338186 kernel: Detected PIPT I-cache on CPU1 Feb 13 18:58:34.338193 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Feb 13 18:58:34.338201 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 18:58:34.338207 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 18:58:34.338214 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 18:58:34.338223 kernel: SMP: Total of 2 processors activated. 
Feb 13 18:58:34.338230 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 18:58:34.338237 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Feb 13 18:58:34.338244 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 18:58:34.338251 kernel: CPU features: detected: CRC32 instructions Feb 13 18:58:34.338258 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 18:58:34.338265 kernel: CPU features: detected: LSE atomic instructions Feb 13 18:58:34.338272 kernel: CPU features: detected: Privileged Access Never Feb 13 18:58:34.338279 kernel: CPU: All CPU(s) started at EL1 Feb 13 18:58:34.338288 kernel: alternatives: applying system-wide alternatives Feb 13 18:58:34.338295 kernel: devtmpfs: initialized Feb 13 18:58:34.338302 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 18:58:34.338309 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 18:58:34.338316 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 18:58:34.338323 kernel: SMBIOS 3.1.0 present. Feb 13 18:58:34.338330 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Feb 13 18:58:34.338337 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 18:58:34.338344 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 18:58:34.338353 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 18:58:34.338360 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 18:58:34.338367 kernel: audit: initializing netlink subsys (disabled) Feb 13 18:58:34.338374 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Feb 13 18:58:34.338381 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 18:58:34.338388 kernel: cpuidle: using governor menu Feb 13 18:58:34.338395 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Feb 13 18:58:34.338402 kernel: ASID allocator initialised with 32768 entries Feb 13 18:58:34.338409 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 18:58:34.338417 kernel: Serial: AMBA PL011 UART driver Feb 13 18:58:34.338424 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 18:58:34.338435 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 18:58:34.338443 kernel: Modules: 508880 pages in range for PLT usage Feb 13 18:58:34.338460 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 18:58:34.338468 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 18:58:34.338475 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 18:58:34.338481 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 18:58:34.338488 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 18:58:34.338498 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 18:58:34.338505 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 18:58:34.338512 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 18:58:34.338519 kernel: ACPI: Added _OSI(Module Device) Feb 13 18:58:34.338526 kernel: ACPI: Added _OSI(Processor Device) Feb 13 18:58:34.338533 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 18:58:34.338540 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 18:58:34.338547 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 18:58:34.338554 kernel: ACPI: Interpreter enabled Feb 13 18:58:34.338562 kernel: ACPI: Using GIC for interrupt routing Feb 13 18:58:34.338569 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Feb 13 18:58:34.338576 kernel: printk: console [ttyAMA0] enabled Feb 13 18:58:34.338583 kernel: printk: bootconsole [pl11] disabled Feb 13 18:58:34.338590 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Feb 13 18:58:34.338597 kernel: iommu: Default domain type: Translated Feb 13 18:58:34.338604 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 18:58:34.338611 kernel: efivars: Registered efivars operations Feb 13 18:58:34.338618 kernel: vgaarb: loaded Feb 13 18:58:34.338626 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 18:58:34.338633 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 18:58:34.338640 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 18:58:34.338647 kernel: pnp: PnP ACPI init Feb 13 18:58:34.338654 kernel: pnp: PnP ACPI: found 0 devices Feb 13 18:58:34.338661 kernel: NET: Registered PF_INET protocol family Feb 13 18:58:34.338668 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 18:58:34.338675 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 18:58:34.338683 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 18:58:34.338696 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 18:58:34.338704 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 18:58:34.338711 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 18:58:34.338718 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 18:58:34.338725 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 18:58:34.338732 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 18:58:34.338739 kernel: PCI: CLS 0 bytes, default 64 Feb 13 18:58:34.338746 kernel: kvm [1]: HYP mode not available Feb 13 18:58:34.338753 kernel: Initialise system trusted keyrings Feb 13 18:58:34.338762 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 18:58:34.338769 kernel: Key type asymmetric registered Feb 13 18:58:34.338776 kernel: Asymmetric key parser 'x509' registered Feb 13 18:58:34.338783 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 18:58:34.338790 kernel: io scheduler mq-deadline registered Feb 13 18:58:34.338797 kernel: io scheduler kyber registered Feb 13 18:58:34.338804 kernel: io scheduler bfq registered Feb 13 18:58:34.338811 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 18:58:34.338818 kernel: thunder_xcv, ver 1.0 Feb 13 18:58:34.338826 kernel: thunder_bgx, ver 1.0 Feb 13 18:58:34.338833 kernel: nicpf, ver 1.0 Feb 13 18:58:34.338840 kernel: nicvf, ver 1.0 Feb 13 18:58:34.339000 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 18:58:34.339073 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T18:58:33 UTC (1739473113) Feb 13 18:58:34.339083 kernel: efifb: probing for efifb Feb 13 18:58:34.339090 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 13 18:58:34.339097 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 13 18:58:34.339107 kernel: efifb: scrolling: redraw Feb 13 18:58:34.339113 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 18:58:34.339120 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 18:58:34.339127 kernel: fb0: EFI VGA frame buffer device Feb 13 18:58:34.339134 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Feb 13 18:58:34.339141 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 18:58:34.339148 kernel: No ACPI PMU IRQ for CPU0 Feb 13 18:58:34.339155 kernel: No ACPI PMU IRQ for CPU1 Feb 13 18:58:34.339162 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Feb 13 18:58:34.339170 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 18:58:34.339177 kernel: watchdog: Hard watchdog permanently disabled Feb 13 18:58:34.339185 kernel: NET: Registered PF_INET6 protocol family Feb 13 18:58:34.339192 kernel: Segment Routing with IPv6 Feb 13 18:58:34.339199 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 18:58:34.339206 kernel: NET: Registered PF_PACKET protocol family Feb 13 18:58:34.339212 kernel: Key type dns_resolver registered Feb 13 18:58:34.339219 kernel: registered taskstats version 1 Feb 13 18:58:34.339226 kernel: Loading compiled-in X.509 certificates Feb 13 18:58:34.339234 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 987d382bd4f498c8030ef29b348ef5d6fcf1f0e3' Feb 13 18:58:34.339242 kernel: Key type .fscrypt registered Feb 13 18:58:34.339248 kernel: Key type fscrypt-provisioning registered Feb 13 18:58:34.339255 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 18:58:34.339262 kernel: ima: Allocated hash algorithm: sha1 Feb 13 18:58:34.339269 kernel: ima: No architecture policies found Feb 13 18:58:34.339276 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 18:58:34.339283 kernel: clk: Disabling unused clocks Feb 13 18:58:34.339290 kernel: Freeing unused kernel memory: 39936K Feb 13 18:58:34.339299 kernel: Run /init as init process Feb 13 18:58:34.339306 kernel: with arguments: Feb 13 18:58:34.339313 kernel: /init Feb 13 18:58:34.339320 kernel: with environment: Feb 13 18:58:34.339326 kernel: HOME=/ Feb 13 18:58:34.339333 kernel: TERM=linux Feb 13 18:58:34.339340 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 18:58:34.339349 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 18:58:34.339361 systemd[1]: Detected virtualization microsoft. Feb 13 18:58:34.339368 systemd[1]: Detected architecture arm64. Feb 13 18:58:34.339376 systemd[1]: Running in initrd. Feb 13 18:58:34.339383 systemd[1]: No hostname configured, using default hostname. Feb 13 18:58:34.339390 systemd[1]: Hostname set to . Feb 13 18:58:34.339405 systemd[1]: Initializing machine ID from random generator. Feb 13 18:58:34.339413 systemd[1]: Queued start job for default target initrd.target. Feb 13 18:58:34.339420 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 18:58:34.339430 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 18:58:34.339438 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 18:58:34.339446 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 18:58:34.341515 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 18:58:34.341526 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 18:58:34.341536 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 18:58:34.341551 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 18:58:34.341558 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 18:58:34.341566 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 18:58:34.341574 systemd[1]: Reached target paths.target - Path Units. Feb 13 18:58:34.341581 systemd[1]: Reached target slices.target - Slice Units. Feb 13 18:58:34.341588 systemd[1]: Reached target swap.target - Swaps. Feb 13 18:58:34.341596 systemd[1]: Reached target timers.target - Timer Units. Feb 13 18:58:34.341603 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 18:58:34.341611 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 18:58:34.341621 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 18:58:34.341632 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Feb 13 18:58:34.341640 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 18:58:34.341647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 18:58:34.341655 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 18:58:34.341663 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 18:58:34.341670 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 18:58:34.341678 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 18:58:34.341687 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 18:58:34.341695 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 18:58:34.341702 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 18:58:34.341736 systemd-journald[218]: Collecting audit messages is disabled. Feb 13 18:58:34.341759 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 18:58:34.341767 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 18:58:34.341775 systemd-journald[218]: Journal started Feb 13 18:58:34.341797 systemd-journald[218]: Runtime Journal (/run/log/journal/8ca503a3528e4446a98f983c0e05025a) is 8.0M, max 78.5M, 70.5M free. Feb 13 18:58:34.347951 systemd-modules-load[219]: Inserted module 'overlay' Feb 13 18:58:34.382052 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 18:58:34.382112 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 18:58:34.382127 kernel: Bridge firewalling registered Feb 13 18:58:34.391582 systemd-modules-load[219]: Inserted module 'br_netfilter' Feb 13 18:58:34.397253 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 18:58:34.410201 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 18:58:34.417695 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 18:58:34.428836 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 18:58:34.446590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:58:34.464614 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 18:58:34.473627 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 18:58:34.493906 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 18:58:34.523815 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 18:58:34.538626 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 18:58:34.555484 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 18:58:34.562185 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 18:58:34.574583 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 18:58:34.598973 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 18:58:34.614155 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 18:58:34.630704 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Feb 13 18:58:34.646725 dracut-cmdline[251]: dracut-dracut-053 Feb 13 18:58:34.651298 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 18:58:34.668239 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b Feb 13 18:58:34.674688 systemd-resolved[255]: Positive Trust Anchors: Feb 13 18:58:34.674701 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 18:58:34.674731 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 18:58:34.679777 systemd-resolved[255]: Defaulting to hostname 'linux'. Feb 13 18:58:34.700606 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 18:58:34.707908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 18:58:34.792467 kernel: SCSI subsystem initialized Feb 13 18:58:34.800468 kernel: Loading iSCSI transport class v2.0-870. Feb 13 18:58:34.811496 kernel: iscsi: registered transport (tcp) Feb 13 18:58:34.829131 kernel: iscsi: registered transport (qla4xxx) Feb 13 18:58:34.829173 kernel: QLogic iSCSI HBA Driver Feb 13 18:58:34.868889 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 18:58:34.883714 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 18:58:34.914466 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 18:58:34.914518 kernel: device-mapper: uevent: version 1.0.3 Feb 13 18:58:34.914529 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 18:58:34.968477 kernel: raid6: neonx8 gen() 15771 MB/s Feb 13 18:58:34.989461 kernel: raid6: neonx4 gen() 15817 MB/s Feb 13 18:58:35.009460 kernel: raid6: neonx2 gen() 13211 MB/s Feb 13 18:58:35.030461 kernel: raid6: neonx1 gen() 10527 MB/s Feb 13 18:58:35.050459 kernel: raid6: int64x8 gen() 6793 MB/s Feb 13 18:58:35.070459 kernel: raid6: int64x4 gen() 7353 MB/s Feb 13 18:58:35.091462 kernel: raid6: int64x2 gen() 6109 MB/s Feb 13 18:58:35.115305 kernel: raid6: int64x1 gen() 5059 MB/s Feb 13 18:58:35.115316 kernel: raid6: using algorithm neonx4 gen() 15817 MB/s Feb 13 18:58:35.140108 kernel: raid6: .... 
xor() 12313 MB/s, rmw enabled Feb 13 18:58:35.140125 kernel: raid6: using neon recovery algorithm Feb 13 18:58:35.152725 kernel: xor: measuring software checksum speed Feb 13 18:58:35.152746 kernel: 8regs : 21590 MB/sec Feb 13 18:58:35.156452 kernel: 32regs : 21624 MB/sec Feb 13 18:58:35.160098 kernel: arm64_neon : 27851 MB/sec Feb 13 18:58:35.164502 kernel: xor: using function: arm64_neon (27851 MB/sec) Feb 13 18:58:35.214467 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 18:58:35.224212 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 18:58:35.241594 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 18:58:35.266105 systemd-udevd[438]: Using default interface naming scheme 'v255'. Feb 13 18:58:35.272786 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 18:58:35.298616 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 18:58:35.313541 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Feb 13 18:58:35.340189 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 18:58:35.355781 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 18:58:35.394929 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 18:58:35.416783 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 18:58:35.442400 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 18:58:35.452270 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 18:58:35.471565 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 18:58:35.493272 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 18:58:35.528642 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 18:58:35.550516 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 18:58:35.576357 kernel: hv_vmbus: Vmbus version:5.3 Feb 13 18:58:35.576385 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 18:58:35.578786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 18:58:35.610440 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 13 18:58:35.610486 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 18:58:35.610496 kernel: hv_vmbus: registering driver hid_hyperv Feb 13 18:58:35.610505 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 13 18:58:35.578909 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 18:58:35.638765 kernel: PTP clock support registered Feb 13 18:58:35.638785 kernel: hv_vmbus: registering driver hv_netvsc Feb 13 18:58:35.637294 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 18:58:35.669344 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 18:58:35.669372 kernel: hv_vmbus: registering driver hv_utils Feb 13 18:58:35.651225 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 13 18:58:35.696472 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 18:58:35.696497 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 18:58:35.696515 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 13 18:58:35.696525 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 18:58:35.696534 kernel: scsi host0: storvsc_host_t Feb 13 18:58:35.651392 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:58:36.176246 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 13 18:58:36.176428 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 18:58:36.176597 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 18:58:36.176752 kernel: scsi host1: storvsc_host_t Feb 13 18:58:36.177061 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 18:58:35.689824 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 18:58:36.153773 systemd-resolved[255]: Clock change detected. Flushing caches. Feb 13 18:58:36.185769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 18:58:36.215180 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 18:58:36.267923 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 18:58:36.268120 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 18:58:36.268132 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 18:58:36.268216 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 18:58:36.296770 kernel: hv_netvsc 0022487b-bff3-0022-487b-bff30022487b eth0: VF slot 1 added Feb 13 18:58:36.296899 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 18:58:36.296989 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 18:58:36.297069 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 18:58:36.297147 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 18:58:36.297225 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 18:58:36.297235 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 18:58:36.215270 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:58:36.346265 kernel: hv_vmbus: registering driver hv_pci Feb 13 18:58:36.346300 kernel: hv_pci 702a1354-6378-4f54-a12a-5e48ebe1bd88: PCI VMBus probing: Using version 0x10004 Feb 13 18:58:36.415129 kernel: hv_pci 702a1354-6378-4f54-a12a-5e48ebe1bd88: PCI host bridge to bus 6378:00 Feb 13 18:58:36.415243 kernel: pci_bus 6378:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 13 18:58:36.415335 kernel: pci_bus 6378:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 18:58:36.415416 kernel: pci 6378:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 13 18:58:36.415530 kernel: pci 6378:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 13 18:58:36.415614 kernel: pci 6378:00:02.0: enabling Extended Tags Feb 13 18:58:36.415692 kernel: pci 6378:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6378:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Feb 13 18:58:36.415769 kernel: pci_bus 6378:00: busn_res: [bus 00-ff] end is updated to 00 Feb 13 18:58:36.415841 kernel: pci 6378:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 13 18:58:36.245443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 18:58:36.332647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:58:36.378665 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 18:58:36.450026 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 18:58:36.482505 kernel: mlx5_core 6378:00:02.0: enabling device (0000 -> 0002) Feb 13 18:58:36.701755 kernel: mlx5_core 6378:00:02.0: firmware version: 16.30.1284 Feb 13 18:58:36.701879 kernel: hv_netvsc 0022487b-bff3-0022-487b-bff30022487b eth0: VF registering: eth1 Feb 13 18:58:36.701981 kernel: mlx5_core 6378:00:02.0 eth1: joined to eth0 Feb 13 18:58:36.702077 kernel: mlx5_core 6378:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Feb 13 18:58:36.710458 kernel: mlx5_core 6378:00:02.0 enP25464s1: renamed from eth1 Feb 13 18:58:36.825579 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 18:58:36.969465 kernel: BTRFS: device fsid 55beb02a-1d0d-4a3e-812c-2737f0301ec8 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (495) Feb 13 18:58:36.977456 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (492) Feb 13 18:58:36.988606 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Feb 13 18:58:36.995613 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 18:58:37.017486 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 18:58:37.040727 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 18:58:37.102279 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Feb 13 18:58:38.071485 disk-uuid[605]: The operation has completed successfully. Feb 13 18:58:38.077367 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 18:58:38.138501 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 18:58:38.138602 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 18:58:38.166612 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 18:58:38.181177 sh[694]: Success Feb 13 18:58:38.211486 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 18:58:38.443362 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 18:58:38.464579 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 18:58:38.473814 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 18:58:38.509660 kernel: BTRFS info (device dm-0): first mount of filesystem 55beb02a-1d0d-4a3e-812c-2737f0301ec8 Feb 13 18:58:38.509723 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 18:58:38.517333 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 18:58:38.523152 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 18:58:38.528173 kernel: BTRFS info (device dm-0): using free space tree Feb 13 18:58:38.858163 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 18:58:38.863963 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 18:58:38.883745 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Feb 13 18:58:38.891623 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 18:58:38.930119 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:58:38.930178 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 18:58:38.934769 kernel: BTRFS info (device sda6): using free space tree Feb 13 18:58:38.983371 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 18:58:38.999921 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 18:58:39.003694 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 18:58:39.013943 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 18:58:39.037453 kernel: BTRFS info (device sda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:58:39.043145 systemd-networkd[871]: lo: Link UP Feb 13 18:58:39.043739 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 18:58:39.046754 systemd-networkd[871]: lo: Gained carrier Feb 13 18:58:39.048862 systemd-networkd[871]: Enumeration completed Feb 13 18:58:39.059890 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 18:58:39.059894 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 18:58:39.060760 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 18:58:39.072127 systemd[1]: Reached target network.target - Network. Feb 13 18:58:39.107664 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 18:58:39.135461 kernel: mlx5_core 6378:00:02.0 enP25464s1: Link up Feb 13 18:58:39.180454 kernel: hv_netvsc 0022487b-bff3-0022-487b-bff30022487b eth0: Data path switched to VF: enP25464s1 Feb 13 18:58:39.181477 systemd-networkd[871]: enP25464s1: Link UP Feb 13 18:58:39.181701 systemd-networkd[871]: eth0: Link UP Feb 13 18:58:39.182100 systemd-networkd[871]: eth0: Gained carrier Feb 13 18:58:39.182111 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 18:58:39.193601 systemd-networkd[871]: enP25464s1: Gained carrier Feb 13 18:58:39.216477 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 18:58:40.265989 ignition[879]: Ignition 2.20.0 Feb 13 18:58:40.266001 ignition[879]: Stage: fetch-offline Feb 13 18:58:40.271101 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 18:58:40.266035 ignition[879]: no configs at "/usr/lib/ignition/base.d" Feb 13 18:58:40.266044 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 18:58:40.266129 ignition[879]: parsed url from cmdline: "" Feb 13 18:58:40.294579 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 18:58:40.266133 ignition[879]: no config URL provided Feb 13 18:58:40.266137 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 18:58:40.266144 ignition[879]: no config at "/usr/lib/ignition/user.ign" Feb 13 18:58:40.266149 ignition[879]: failed to fetch config: resource requires networking Feb 13 18:58:40.266654 ignition[879]: Ignition finished successfully Feb 13 18:58:40.332699 ignition[887]: Ignition 2.20.0 Feb 13 18:58:40.332706 ignition[887]: Stage: fetch Feb 13 18:58:40.332905 ignition[887]: no configs at "/usr/lib/ignition/base.d" Feb 13 18:58:40.332914 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 18:58:40.333028 ignition[887]: parsed url from cmdline: "" Feb 13 18:58:40.333035 ignition[887]: no config URL provided Feb 13 18:58:40.333040 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 18:58:40.333046 ignition[887]: no config at "/usr/lib/ignition/user.ign" Feb 13 18:58:40.333072 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 18:58:40.459625 ignition[887]: GET result: OK Feb 13 18:58:40.459694 ignition[887]: config has been read from IMDS userdata Feb 13 18:58:40.459727 ignition[887]: parsing config with SHA512: 064372da013ba3872ca641a9e80a068f0be60c88a9547665b140cc4762963ddb8124ee65617bc1796686662e4cded6f82422251c6d6d11e23a923456a28b3aba Feb 13 18:58:40.463543 unknown[887]: fetched base config from "system" Feb 13 18:58:40.463842 ignition[887]: fetch: fetch complete Feb 13 18:58:40.463553 unknown[887]: fetched base config from "system" Feb 13 18:58:40.463847 ignition[887]: fetch: fetch passed Feb 13 18:58:40.463558 unknown[887]: fetched user config from "azure" Feb 13 18:58:40.463892 ignition[887]: Ignition finished successfully Feb 13 18:58:40.469032 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 18:58:40.493634 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 18:58:40.514941 ignition[893]: Ignition 2.20.0 Feb 13 18:58:40.514948 ignition[893]: Stage: kargs Feb 13 18:58:40.517763 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 18:58:40.515122 ignition[893]: no configs at "/usr/lib/ignition/base.d" Feb 13 18:58:40.515131 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 18:58:40.516030 ignition[893]: kargs: kargs passed Feb 13 18:58:40.516093 ignition[893]: Ignition finished successfully Feb 13 18:58:40.546857 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 18:58:40.568610 ignition[900]: Ignition 2.20.0 Feb 13 18:58:40.568621 ignition[900]: Stage: disks Feb 13 18:58:40.574468 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 18:58:40.568933 ignition[900]: no configs at "/usr/lib/ignition/base.d" Feb 13 18:58:40.582380 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 18:58:40.568947 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 18:58:40.594403 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 18:58:40.569871 ignition[900]: disks: disks passed Feb 13 18:58:40.606578 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 18:58:40.569923 ignition[900]: Ignition finished successfully Feb 13 18:58:40.618456 systemd[1]: Reached target sysinit.target - System Initialization. 
Feb 13 18:58:40.633167 systemd[1]: Reached target basic.target - Basic System. Feb 13 18:58:40.640111 systemd-networkd[871]: enP25464s1: Gained IPv6LL Feb 13 18:58:40.667737 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 18:58:40.760051 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 18:58:40.769793 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 18:58:40.787666 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 18:58:40.848457 kernel: EXT4-fs (sda9): mounted filesystem 005a6458-8fd3-46f1-ab43-85ef18df7ccd r/w with ordered data mode. Quota mode: none. Feb 13 18:58:40.848529 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 18:58:40.853941 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 18:58:40.901526 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 18:58:40.912552 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 18:58:40.925959 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 18:58:40.933620 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 18:58:40.969165 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (919) Feb 13 18:58:40.969189 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:58:40.933662 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 18:58:40.989502 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 18:58:40.977490 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 18:58:41.006224 kernel: BTRFS info (device sda6): using free space tree Feb 13 18:58:41.014488 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 18:58:41.014785 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 18:58:41.023252 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 18:58:41.149601 systemd-networkd[871]: eth0: Gained IPv6LL Feb 13 18:58:41.573110 coreos-metadata[921]: Feb 13 18:58:41.573 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 18:58:41.590017 coreos-metadata[921]: Feb 13 18:58:41.589 INFO Fetch successful Feb 13 18:58:41.595585 coreos-metadata[921]: Feb 13 18:58:41.595 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 18:58:41.607690 coreos-metadata[921]: Feb 13 18:58:41.607 INFO Fetch successful Feb 13 18:58:41.618997 coreos-metadata[921]: Feb 13 18:58:41.618 INFO wrote hostname ci-4186.1.1-a-c0811b896b to /sysroot/etc/hostname Feb 13 18:58:41.629346 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 18:58:41.675428 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 18:58:41.715345 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Feb 13 18:58:41.740849 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 18:58:41.766116 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 18:58:42.840336 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
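flatcar-metadata-hostname.service, seen above, fetches the instance name from the IMDS endpoint in the log and writes it to /sysroot/etc/hostname (here: ci-4186.1.1-a-c0811b896b). A condensed sketch of those two steps, with the same caveats as the previous example; the destination path assumes the initramfs /sysroot layout.

import urllib.request

# Endpoint and destination are taken from the log lines above; the "Metadata: true"
# header is required by IMDS.
URL = ("http://169.254.169.254/metadata/instance/compute/name"
       "?api-version=2017-08-01&format=text")
HOSTNAME_FILE = "/sysroot/etc/hostname"

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    name = resp.read().decode().strip()   # e.g. ci-4186.1.1-a-c0811b896b

with open(HOSTNAME_FILE, "w") as f:
    f.write(name + "\n")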
Feb 13 18:58:42.856640 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 18:58:42.865627 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 18:58:42.889467 kernel: BTRFS info (device sda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:58:42.889083 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 18:58:42.918936 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 18:58:42.930394 ignition[1039]: INFO : Ignition 2.20.0 Feb 13 18:58:42.930394 ignition[1039]: INFO : Stage: mount Feb 13 18:58:42.939514 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 18:58:42.939514 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 18:58:42.939514 ignition[1039]: INFO : mount: mount passed Feb 13 18:58:42.939514 ignition[1039]: INFO : Ignition finished successfully Feb 13 18:58:42.935745 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 18:58:42.967560 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 18:58:42.982752 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 18:58:43.013481 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1051) Feb 13 18:58:43.027856 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:58:43.027910 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 18:58:43.032451 kernel: BTRFS info (device sda6): using free space tree Feb 13 18:58:43.040471 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 18:58:43.041408 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 18:58:43.067679 ignition[1068]: INFO : Ignition 2.20.0 Feb 13 18:58:43.073325 ignition[1068]: INFO : Stage: files Feb 13 18:58:43.073325 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 18:58:43.073325 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 18:58:43.073325 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Feb 13 18:58:43.097216 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 18:58:43.097216 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 18:58:43.219619 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 18:58:43.228210 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Feb 13 18:58:43.220004 unknown[1068]: wrote ssh authorized keys file for user: core Feb 13 18:58:43.670482 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 18:58:43.905691 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 18:58:43.919148 ignition[1068]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 18:58:43.919148 ignition[1068]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 18:58:43.919148 ignition[1068]: INFO : files: files passed Feb 13 18:58:43.919148 ignition[1068]: INFO : Ignition finished successfully Feb 13 18:58:43.920584 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 18:58:43.953689 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 18:58:43.962615 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 18:58:43.986517 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 18:58:43.986628 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 18:58:44.015893 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 18:58:44.015893 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 18:58:44.001280 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 18:58:44.047206 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 18:58:44.015206 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 18:58:44.047757 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 18:58:44.085799 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 18:58:44.087470 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 18:58:44.098515 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 18:58:44.111105 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 18:58:44.122798 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 18:58:44.136701 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 18:58:44.158832 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Feb 13 18:58:44.177749 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 18:58:44.195684 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 18:58:44.197463 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 18:58:44.217249 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 18:58:44.224074 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 18:58:44.237011 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 18:58:44.248848 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 18:58:44.248919 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 18:58:44.266120 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 18:58:44.278635 systemd[1]: Stopped target basic.target - Basic System. Feb 13 18:58:44.289470 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 18:58:44.300612 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 18:58:44.315072 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 18:58:44.327267 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 18:58:44.339260 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 18:58:44.352024 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 18:58:44.365745 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 18:58:44.376850 systemd[1]: Stopped target swap.target - Swaps. Feb 13 18:58:44.387105 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 18:58:44.387183 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 18:58:44.404540 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 18:58:44.417680 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 18:58:44.431288 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 18:58:44.437681 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 18:58:44.445168 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 18:58:44.445241 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 18:58:44.465074 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 18:58:44.465129 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 18:58:44.477653 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 18:58:44.477701 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 18:58:44.489180 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 18:58:44.489233 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 18:58:44.521577 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Feb 13 18:58:44.557972 ignition[1121]: INFO : Ignition 2.20.0 Feb 13 18:58:44.557972 ignition[1121]: INFO : Stage: umount Feb 13 18:58:44.557972 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 18:58:44.557972 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 18:58:44.557972 ignition[1121]: INFO : umount: umount passed Feb 13 18:58:44.557972 ignition[1121]: INFO : Ignition finished successfully Feb 13 18:58:44.527186 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 18:58:44.527252 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 18:58:44.558561 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 18:58:44.571908 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 18:58:44.571986 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 18:58:44.584677 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 18:58:44.584738 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 18:58:44.598623 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 18:58:44.598724 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 18:58:44.608629 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 18:58:44.608706 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 18:58:44.614491 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 18:58:44.614540 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 18:58:44.625009 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 18:58:44.625055 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 18:58:44.637269 systemd[1]: Stopped target network.target - Network. Feb 13 18:58:44.649504 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 18:58:44.649569 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 18:58:44.661533 systemd[1]: Stopped target paths.target - Path Units. Feb 13 18:58:44.673166 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 18:58:44.685188 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 18:58:44.692623 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 18:58:44.697804 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 18:58:44.710516 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 18:58:44.710574 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 18:58:44.721521 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 18:58:44.721566 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 18:58:44.733186 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 18:58:44.733249 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 18:58:44.739328 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 18:58:44.739377 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 18:58:44.750397 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 18:58:44.756840 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Feb 13 18:58:44.768490 systemd-networkd[871]: eth0: DHCPv6 lease lost Feb 13 18:58:45.006670 kernel: hv_netvsc 0022487b-bff3-0022-487b-bff30022487b eth0: Data path switched from VF: enP25464s1 Feb 13 18:58:44.769811 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 18:58:44.773818 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 18:58:44.773943 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 18:58:44.782595 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 18:58:44.782686 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 18:58:44.795395 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 18:58:44.795482 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 18:58:44.828675 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 18:58:44.838426 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 18:58:44.838518 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 18:58:44.853817 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 18:58:44.853881 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 18:58:44.868235 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 18:58:44.868302 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 18:58:44.879472 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 18:58:44.879528 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 18:58:44.891662 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 18:58:44.907123 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 18:58:44.907227 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 18:58:44.934541 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 18:58:44.934658 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 18:58:44.946246 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 18:58:44.946384 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 18:58:44.959102 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 18:58:44.959174 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 18:58:44.970341 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 18:58:44.970386 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 18:58:44.982353 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 18:58:44.982406 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 18:58:45.006504 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 18:58:45.006567 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 18:58:45.017349 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 18:58:45.017414 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 18:58:45.070640 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 18:58:45.087700 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Feb 13 18:58:45.087778 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 18:58:45.294260 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Feb 13 18:58:45.100625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 18:58:45.100678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:58:45.114114 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 18:58:45.114220 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 18:58:45.125247 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 18:58:45.126459 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 18:58:45.138210 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 18:58:45.168688 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 18:58:45.198197 systemd[1]: Switching root. Feb 13 18:58:45.345010 systemd-journald[218]: Journal stopped Feb 13 18:58:49.874425 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 18:58:49.874472 kernel: SELinux: policy capability open_perms=1 Feb 13 18:58:49.874483 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 18:58:49.874490 kernel: SELinux: policy capability always_check_network=0 Feb 13 18:58:49.874502 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 18:58:49.874510 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 18:58:49.874518 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 18:58:49.874527 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 18:58:49.874537 kernel: audit: type=1403 audit(1739473126.547:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 18:58:49.874546 systemd[1]: Successfully loaded SELinux policy in 144.189ms. Feb 13 18:58:49.874558 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.979ms. Feb 13 18:58:49.874568 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 18:58:49.874576 systemd[1]: Detected virtualization microsoft. Feb 13 18:58:49.874584 systemd[1]: Detected architecture arm64. Feb 13 18:58:49.874593 systemd[1]: Detected first boot. Feb 13 18:58:49.874604 systemd[1]: Hostname set to . Feb 13 18:58:49.874613 systemd[1]: Initializing machine ID from random generator. Feb 13 18:58:49.874621 zram_generator::config[1163]: No configuration found. Feb 13 18:58:49.874631 systemd[1]: Populated /etc with preset unit settings. Feb 13 18:58:49.874639 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 18:58:49.874647 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 18:58:49.874656 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 18:58:49.874667 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 18:58:49.874676 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 18:58:49.874685 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 18:58:49.874693 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
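After the switch root above, the second systemd instance loads the SELinux policy and logs "Initializing machine ID from random generator."; the resulting ID also names the journal directory (/run/log/journal/5f5111cb9459471ebdaccfcae319dc12) a few lines later. Purely as an illustration of the format, not of systemd's own generator, a random 128-bit machine-id-style value can be produced like this:

import uuid

# /etc/machine-id holds 32 lower-case hex digits (128 bits); systemd sets it up
# with systemd-machine-id-setup. A random stand-in, for illustration only:
machine_id = uuid.uuid4().hex
print(machine_id)
# The journal path above, /run/log/journal/5f5111cb9459471ebdaccfcae319dc12,
# is named after exactly this kind of identifier.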
Feb 13 18:58:49.874702 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 18:58:49.874711 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 18:58:49.874720 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 18:58:49.874731 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 18:58:49.874740 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 18:58:49.874749 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 18:58:49.874758 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 18:58:49.874766 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 18:58:49.874775 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 18:58:49.874784 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 18:58:49.874793 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 18:58:49.874804 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 18:58:49.874813 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 18:58:49.874822 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 18:58:49.874833 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 18:58:49.874842 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 18:58:49.874851 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 18:58:49.874860 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 18:58:49.874869 systemd[1]: Reached target slices.target - Slice Units. Feb 13 18:58:49.874879 systemd[1]: Reached target swap.target - Swaps. Feb 13 18:58:49.874888 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 18:58:49.874897 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 18:58:49.874906 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 18:58:49.874915 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 18:58:49.874925 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 18:58:49.874936 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 18:58:49.874946 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 18:58:49.874955 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 18:58:49.874964 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 18:58:49.874973 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 18:58:49.874982 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 18:58:49.874991 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 18:58:49.875002 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 18:58:49.875011 systemd[1]: Reached target machines.target - Containers. 
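Unit names in the span above, such as system-serial\x2dgetty.slice and dev-disk-by\x2dlabel-OEM.device, use systemd's name escaping: '/' in a path becomes '-', and characters such as '-' are hex-escaped as \xNN. The sketch below is a simplified take on `systemd-escape --path` (the real tool covers more corner cases) and only aims to reproduce the device unit name seen above.

def systemd_escape_path(path: str) -> str:
    # Drop leading/trailing '/', map '/' to '-', keep ASCII [A-Za-z0-9:_.],
    # hex-escape everything else as \xNN (lower-case), as systemd does.
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isascii() and (ch.isalnum() or ch in ":_."):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

# Reproduces the device unit name systemd is waiting for above:
print(systemd_escape_path("/dev/disk/by-label/OEM") + ".device")
# -> dev-disk-by\x2dlabel-OEM.device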
Feb 13 18:58:49.875021 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 18:58:49.875030 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 18:58:49.875039 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 18:58:49.875049 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 18:58:49.875058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 18:58:49.875067 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 18:58:49.875077 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 18:58:49.875087 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 18:58:49.875096 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 18:58:49.875105 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 18:58:49.875115 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 18:58:49.875124 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 18:58:49.875134 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 18:58:49.875143 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 18:58:49.875154 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 18:58:49.875163 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 18:58:49.875172 kernel: fuse: init (API version 7.39) Feb 13 18:58:49.875180 kernel: ACPI: bus type drm_connector registered Feb 13 18:58:49.875188 kernel: loop: module loaded Feb 13 18:58:49.875197 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 18:58:49.875206 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 18:58:49.875215 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 18:58:49.875224 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 18:58:49.875235 systemd[1]: Stopped verity-setup.service. Feb 13 18:58:49.875274 systemd-journald[1266]: Collecting audit messages is disabled. Feb 13 18:58:49.875294 systemd-journald[1266]: Journal started Feb 13 18:58:49.875319 systemd-journald[1266]: Runtime Journal (/run/log/journal/5f5111cb9459471ebdaccfcae319dc12) is 8.0M, max 78.5M, 70.5M free. Feb 13 18:58:48.843134 systemd[1]: Queued start job for default target multi-user.target. Feb 13 18:58:48.960402 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 18:58:48.960787 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 18:58:48.961094 systemd[1]: systemd-journald.service: Consumed 3.210s CPU time. Feb 13 18:58:49.889360 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 18:58:49.890219 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 18:58:49.896670 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 18:58:49.903286 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 18:58:49.909019 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 18:58:49.915368 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Feb 13 18:58:49.921949 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 18:58:49.929461 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 18:58:49.938292 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 18:58:49.946299 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 18:58:49.947482 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 18:58:49.954990 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 18:58:49.955130 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 18:58:49.961982 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 18:58:49.962107 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 18:58:49.969025 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 18:58:49.969154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 18:58:49.976715 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 18:58:49.976850 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 18:58:49.983993 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 18:58:49.984131 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 18:58:49.991487 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 18:58:49.998362 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 18:58:50.005829 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 18:58:50.013982 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 18:58:50.029271 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 18:58:50.040613 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 18:58:50.051606 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 18:58:50.058546 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 18:58:50.058588 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 18:58:50.066430 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 18:58:50.075031 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 18:58:50.082780 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 18:58:50.089000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 18:58:50.090994 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 18:58:50.099645 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 18:58:50.106310 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 18:58:50.112637 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 18:58:50.120236 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Feb 13 18:58:50.121319 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 18:58:50.138645 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 18:58:50.147350 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 18:58:50.167685 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 18:58:50.176407 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 18:58:50.193274 systemd-journald[1266]: Time spent on flushing to /var/log/journal/5f5111cb9459471ebdaccfcae319dc12 is 17.367ms for 882 entries. Feb 13 18:58:50.193274 systemd-journald[1266]: System Journal (/var/log/journal/5f5111cb9459471ebdaccfcae319dc12) is 8.0M, max 2.6G, 2.6G free. Feb 13 18:58:50.244429 kernel: loop0: detected capacity change from 0 to 113552 Feb 13 18:58:50.244486 systemd-journald[1266]: Received client request to flush runtime journal. Feb 13 18:58:50.188989 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 18:58:50.205490 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 18:58:50.213221 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 18:58:50.227483 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 18:58:50.236759 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 18:58:50.250322 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 18:58:50.257643 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 18:58:50.267750 udevadm[1300]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 18:58:50.311148 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 18:58:50.311830 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 18:58:50.596454 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 18:58:50.661668 kernel: loop1: detected capacity change from 0 to 116784 Feb 13 18:58:50.696330 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 18:58:50.708616 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 18:58:50.808374 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Feb 13 18:58:50.808392 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Feb 13 18:58:50.812771 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 18:58:50.984459 kernel: loop2: detected capacity change from 0 to 28752 Feb 13 18:58:51.316469 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 18:58:51.329605 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 18:58:51.362146 systemd-udevd[1321]: Using default interface naming scheme 'v255'. 
Feb 13 18:58:51.383460 kernel: loop3: detected capacity change from 0 to 201592 Feb 13 18:58:51.424459 kernel: loop4: detected capacity change from 0 to 113552 Feb 13 18:58:51.440463 kernel: loop5: detected capacity change from 0 to 116784 Feb 13 18:58:51.451456 kernel: loop6: detected capacity change from 0 to 28752 Feb 13 18:58:51.462455 kernel: loop7: detected capacity change from 0 to 201592 Feb 13 18:58:51.468812 (sd-merge)[1323]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Feb 13 18:58:51.469228 (sd-merge)[1323]: Merged extensions into '/usr'. Feb 13 18:58:51.473207 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 18:58:51.473219 systemd[1]: Reloading... Feb 13 18:58:51.600516 zram_generator::config[1370]: No configuration found. Feb 13 18:58:51.682519 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 18:58:51.737482 kernel: hv_vmbus: registering driver hv_balloon Feb 13 18:58:51.751106 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 13 18:58:51.751195 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 13 18:58:51.774848 kernel: hv_vmbus: registering driver hyperv_fb Feb 13 18:58:51.774937 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 13 18:58:51.784955 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 13 18:58:51.778576 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 18:58:51.799963 kernel: Console: switching to colour dummy device 80x25 Feb 13 18:58:51.811313 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 18:58:51.857758 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 18:58:51.857904 systemd[1]: Reloading finished in 384 ms. Feb 13 18:58:51.866784 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1333) Feb 13 18:58:51.890642 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 18:58:51.904658 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 18:58:51.950771 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 18:58:51.965672 systemd[1]: Starting ensure-sysext.service... Feb 13 18:58:51.970912 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 18:58:51.981311 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 18:58:51.996662 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 18:58:52.008693 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 18:58:52.020493 systemd[1]: Reloading requested from client PID 1501 ('systemctl') (unit ensure-sysext.service)... Feb 13 18:58:52.020509 systemd[1]: Reloading... Feb 13 18:58:52.037240 systemd-tmpfiles[1504]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 18:58:52.037514 systemd-tmpfiles[1504]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
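The loop devices and the (sd-merge) lines above are systemd-sysext attaching the extension images, including the kubernetes.raw link written by Ignition earlier, and merging them into /usr. As a rough illustration only, the snippet below lists what such a merge would find under /etc/extensions; the real tool also searches other hierarchy directories (for example /run/extensions and /var/lib/extensions) and performs the actual overlay.

import glob
import os

# /etc/extensions is where Ignition linked kubernetes.raw earlier in this log;
# other standard sysext directories are omitted here for brevity.
for image in sorted(glob.glob("/etc/extensions/*.raw")):
    name = os.path.basename(image).removesuffix(".raw")
    # e.g. kubernetes -> /opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw
    print(name, "->", os.path.realpath(image))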
Feb 13 18:58:52.038186 systemd-tmpfiles[1504]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 18:58:52.038409 systemd-tmpfiles[1504]: ACLs are not supported, ignoring. Feb 13 18:58:52.039369 systemd-tmpfiles[1504]: ACLs are not supported, ignoring. Feb 13 18:58:52.057961 systemd-tmpfiles[1504]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 18:58:52.057972 systemd-tmpfiles[1504]: Skipping /boot Feb 13 18:58:52.067200 systemd-tmpfiles[1504]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 18:58:52.067351 systemd-tmpfiles[1504]: Skipping /boot Feb 13 18:58:52.118481 zram_generator::config[1538]: No configuration found. Feb 13 18:58:52.226136 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 18:58:52.305963 systemd[1]: Reloading finished in 285 ms. Feb 13 18:58:52.321584 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 18:58:52.333895 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 18:58:52.342669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 18:58:52.351818 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:58:52.374656 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 18:58:52.381702 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 18:58:52.388573 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 18:58:52.390419 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 18:58:52.406286 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 18:58:52.419532 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 18:58:52.436728 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 18:58:52.444624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 18:58:52.447688 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 18:58:52.456291 lvm[1602]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 18:58:52.467677 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 18:58:52.476178 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 18:58:52.486733 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 18:58:52.499480 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 18:58:52.507493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 18:58:52.510211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 18:58:52.518040 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 18:58:52.518219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 18:58:52.527865 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Feb 13 18:58:52.528001 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 18:58:52.543519 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 18:58:52.556466 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 18:58:52.563054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 18:58:52.570277 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 18:58:52.582731 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 18:58:52.583771 lvm[1631]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 18:58:52.592764 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 18:58:52.605999 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 18:58:52.613463 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 18:58:52.614312 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 18:58:52.625037 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 18:58:52.635280 augenrules[1644]: No rules Feb 13 18:58:52.640166 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 18:58:52.642479 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 18:58:52.650173 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 18:58:52.658260 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 18:58:52.658409 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 18:58:52.665451 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 18:58:52.667506 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 18:58:52.677009 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 18:58:52.677173 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 18:58:52.700688 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 18:58:52.708759 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 18:58:52.712124 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 18:58:52.731701 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 18:58:52.731928 systemd-networkd[1503]: lo: Link UP Feb 13 18:58:52.732166 systemd-networkd[1503]: lo: Gained carrier Feb 13 18:58:52.734067 systemd-networkd[1503]: Enumeration completed Feb 13 18:58:52.735278 systemd-networkd[1503]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 18:58:52.735358 systemd-networkd[1503]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 18:58:52.736807 systemd-resolved[1613]: Positive Trust Anchors: Feb 13 18:58:52.737122 systemd-resolved[1613]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 18:58:52.737268 systemd-resolved[1613]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 18:58:52.744528 augenrules[1655]: /sbin/augenrules: No change Feb 13 18:58:52.749783 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 18:58:52.757487 augenrules[1676]: No rules Feb 13 18:58:52.759728 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 18:58:52.767929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 18:58:52.769395 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 18:58:52.776139 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 18:58:52.783165 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 18:58:52.784290 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 18:58:52.794350 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 18:58:52.794981 kernel: mlx5_core 6378:00:02.0 enP25464s1: Link up Feb 13 18:58:52.794642 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 18:58:52.802020 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 18:58:52.802164 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 18:58:52.809031 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 18:58:52.809162 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 18:58:52.810804 systemd-resolved[1613]: Using system hostname 'ci-4186.1.1-a-c0811b896b'. Feb 13 18:58:52.816845 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 18:58:52.817618 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 18:58:52.833053 kernel: hv_netvsc 0022487b-bff3-0022-487b-bff30022487b eth0: Data path switched to VF: enP25464s1 Feb 13 18:58:52.833405 systemd-networkd[1503]: enP25464s1: Link UP Feb 13 18:58:52.834009 systemd-networkd[1503]: eth0: Link UP Feb 13 18:58:52.834013 systemd-networkd[1503]: eth0: Gained carrier Feb 13 18:58:52.834029 systemd-networkd[1503]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 18:58:52.836119 systemd[1]: Finished ensure-sysext.service. Feb 13 18:58:52.841049 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 18:58:52.842693 systemd-networkd[1503]: enP25464s1: Gained carrier Feb 13 18:58:52.850044 systemd[1]: Reached target network.target - Network. Feb 13 18:58:52.855195 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 18:58:52.864567 systemd-networkd[1503]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 18:58:52.865573 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
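The positive trust anchor that systemd-resolved logs above is the DNSSEC DS record for the root zone. Below is a tiny sketch that splits that record, copied verbatim from the log, into its fields; the field meanings noted in the comments follow the standard IANA assignments.

# The root trust anchor as it appears in the systemd-resolved lines above.
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, _klass, rtype, key_tag, algorithm, digest_type, digest = record.split()
assert (rtype, algorithm, digest_type) == ("DS", "8", "2")
# Key tag 20326 identifies the root KSK; algorithm 8 is RSA/SHA-256 and
# digest type 2 means the digest field is a SHA-256 hash of the DNSKEY.
print(f"key tag {key_tag}, SHA-256 digest {digest[:16]}...")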
Feb 13 18:58:52.872013 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 18:58:52.872092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 18:58:52.996498 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 18:58:53.005133 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 18:58:54.525661 systemd-networkd[1503]: enP25464s1: Gained IPv6LL Feb 13 18:58:54.845619 systemd-networkd[1503]: eth0: Gained IPv6LL Feb 13 18:58:54.848155 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 18:58:54.856069 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 18:58:55.945180 ldconfig[1292]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 18:58:55.958495 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 18:58:55.969635 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 18:58:55.985401 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 18:58:55.991998 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 18:58:55.998247 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 18:58:56.005402 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 18:58:56.012643 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 18:58:56.018748 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 18:58:56.025847 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 18:58:56.034996 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 18:58:56.035035 systemd[1]: Reached target paths.target - Path Units. Feb 13 18:58:56.042074 systemd[1]: Reached target timers.target - Timer Units. Feb 13 18:58:56.067787 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 18:58:56.075765 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 18:58:56.088009 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 18:58:56.095030 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 18:58:56.101319 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 18:58:56.106649 systemd[1]: Reached target basic.target - Basic System. Feb 13 18:58:56.112158 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 18:58:56.112186 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 18:58:56.123533 systemd[1]: Starting chronyd.service - NTP client/server... Feb 13 18:58:56.131580 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 18:58:56.142678 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Feb 13 18:58:56.152797 (chronyd)[1696]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Feb 13 18:58:56.160645 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 18:58:56.168242 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 18:58:56.176084 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 18:58:56.178631 jq[1700]: false Feb 13 18:58:56.184812 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 18:58:56.184863 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Feb 13 18:58:56.189278 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Feb 13 18:58:56.197199 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Feb 13 18:58:56.200722 chronyd[1709]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Feb 13 18:58:56.201865 KVP[1705]: KVP starting; pid is:1705 Feb 13 18:58:56.204588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:58:56.203184 chronyd[1709]: Timezone right/UTC failed leap second check, ignoring Feb 13 18:58:56.203382 chronyd[1709]: Loaded seccomp filter (level 2) Feb 13 18:58:56.212925 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 18:58:56.221711 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 18:58:56.230651 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 18:58:56.241220 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 18:58:56.255836 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 18:58:56.263154 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 18:58:56.269395 extend-filesystems[1704]: Found loop4 Feb 13 18:58:56.269395 extend-filesystems[1704]: Found loop5 Feb 13 18:58:56.269395 extend-filesystems[1704]: Found loop6 Feb 13 18:58:56.269395 extend-filesystems[1704]: Found loop7 Feb 13 18:58:56.269395 extend-filesystems[1704]: Found sda Feb 13 18:58:56.269395 extend-filesystems[1704]: Found sda1 Feb 13 18:58:56.269395 extend-filesystems[1704]: Found sda2 Feb 13 18:58:56.269395 extend-filesystems[1704]: Found sda3 Feb 13 18:58:56.269395 extend-filesystems[1704]: Found usr Feb 13 18:58:56.269395 extend-filesystems[1704]: Found sda4 Feb 13 18:58:56.269395 extend-filesystems[1704]: Found sda6 Feb 13 18:58:56.269395 extend-filesystems[1704]: Found sda7 Feb 13 18:58:56.269395 extend-filesystems[1704]: Found sda9 Feb 13 18:58:56.269395 extend-filesystems[1704]: Checking size of /dev/sda9 Feb 13 18:58:56.614390 kernel: hv_utils: KVP IC version 4.0 Feb 13 18:58:56.614485 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1748) Feb 13 18:58:56.263724 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
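The extend-filesystems output above inventories the loop devices and sda partitions it can see before checking the size of /dev/sda9. One rough way to produce a similar listing, assuming a Linux /proc, is to read the kernel's partition table directly; this only approximates what the service does.

# Roughly the device inventory extend-filesystems prints above ("Found sda1",
# "Found loop4", ...), taken from the kernel's /proc/partitions table.
with open("/proc/partitions") as f:
    rows = f.read().splitlines()[2:]      # skip the header line and the blank line

for row in rows:
    major, minor, blocks, name = row.split()
    print(f"Found {name} ({int(blocks) // 1024} MiB)")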
Feb 13 18:58:56.317559 KVP[1705]: KVP LIC Version: 3.1 Feb 13 18:58:56.618125 coreos-metadata[1698]: Feb 13 18:58:56.447 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 18:58:56.618125 coreos-metadata[1698]: Feb 13 18:58:56.467 INFO Fetch successful Feb 13 18:58:56.618125 coreos-metadata[1698]: Feb 13 18:58:56.467 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 13 18:58:56.618125 coreos-metadata[1698]: Feb 13 18:58:56.473 INFO Fetch successful Feb 13 18:58:56.618125 coreos-metadata[1698]: Feb 13 18:58:56.477 INFO Fetching http://168.63.129.16/machine/9e43bf31-ba81-41a9-828c-a6af1ffb95aa/c1bcfcdd%2D22c9%2D48df%2Db1dd%2Db33207f0f821.%5Fci%2D4186.1.1%2Da%2Dc0811b896b?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 13 18:58:56.618125 coreos-metadata[1698]: Feb 13 18:58:56.479 INFO Fetch successful Feb 13 18:58:56.618125 coreos-metadata[1698]: Feb 13 18:58:56.480 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 13 18:58:56.618125 coreos-metadata[1698]: Feb 13 18:58:56.494 INFO Fetch successful Feb 13 18:58:56.618391 extend-filesystems[1704]: Old size kept for /dev/sda9 Feb 13 18:58:56.618391 extend-filesystems[1704]: Found sr0 Feb 13 18:58:56.265541 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 18:58:56.339230 dbus-daemon[1699]: [system] SELinux support is enabled Feb 13 18:58:56.654028 update_engine[1723]: I20250213 18:58:56.390348 1723 main.cc:92] Flatcar Update Engine starting Feb 13 18:58:56.654028 update_engine[1723]: I20250213 18:58:56.392833 1723 update_check_scheduler.cc:74] Next update check in 7m20s Feb 13 18:58:56.280560 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 18:58:56.448358 dbus-daemon[1699]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 18:58:56.658683 jq[1727]: true Feb 13 18:58:56.306191 systemd[1]: Started chronyd.service - NTP client/server. Feb 13 18:58:56.324941 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 18:58:56.325124 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 18:58:56.659166 jq[1752]: true Feb 13 18:58:56.326809 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 18:58:56.327010 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 18:58:56.338838 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 18:58:56.663416 bash[1805]: Updated "/home/core/.ssh/authorized_keys" Feb 13 18:58:56.339577 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 18:58:56.355076 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 18:58:56.371347 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 18:58:56.381472 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 18:58:56.385031 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 18:58:56.389279 systemd-logind[1717]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 18:58:56.390916 systemd-logind[1717]: New seat seat0. Feb 13 18:58:56.404130 systemd[1]: Started systemd-logind.service - User Login Management. 
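The coreos-metadata fetches above hit both the Azure WireServer (168.63.129.16) and the instance metadata service; the vmSize URL is quoted verbatim in the log. A minimal sketch of that IMDS request follows, assuming only the documented requirement that a "Metadata: true" header accompany it; this is not the agent's own code.

    # Sketch of the instance-metadata lookup coreos-metadata logged above (assumed
    # re-creation, not the agent's code). Azure IMDS only answers requests that
    # carry the "Metadata: true" header.
    import urllib.request

    IMDS_VMSIZE = ("http://169.254.169.254/metadata/instance/compute/vmSize"
                   "?api-version=2017-08-01&format=text")

    def fetch_vm_size(timeout: float = 5.0) -> str:
        req = urllib.request.Request(IMDS_VMSIZE, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.read().decode()  # plain text response, e.g. the VM size string

    if __name__ == "__main__":
        print(fetch_vm_size())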
Feb 13 18:58:56.447614 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 18:58:56.447650 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 18:58:56.449153 (ntainerd)[1753]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 18:58:56.460776 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 18:58:56.460798 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 18:58:56.475205 systemd[1]: Started update-engine.service - Update Engine. Feb 13 18:58:56.502746 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 18:58:56.602676 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 18:58:56.627285 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 18:58:56.656548 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 18:58:56.683960 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 18:58:56.761601 locksmithd[1772]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 18:58:56.961710 containerd[1753]: time="2025-02-13T18:58:56.959856500Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 18:58:57.012180 containerd[1753]: time="2025-02-13T18:58:57.012119580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:58:57.018014 containerd[1753]: time="2025-02-13T18:58:57.017964220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:58:57.018156 containerd[1753]: time="2025-02-13T18:58:57.018140340Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 18:58:57.018240 containerd[1753]: time="2025-02-13T18:58:57.018226620Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 18:58:57.018473 containerd[1753]: time="2025-02-13T18:58:57.018454260Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 18:58:57.018927 containerd[1753]: time="2025-02-13T18:58:57.018909500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 18:58:57.019065 containerd[1753]: time="2025-02-13T18:58:57.019046940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:58:57.019122 containerd[1753]: time="2025-02-13T18:58:57.019109460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 18:58:57.019696 containerd[1753]: time="2025-02-13T18:58:57.019672860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:58:57.019775 containerd[1753]: time="2025-02-13T18:58:57.019761900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 18:58:57.020007 containerd[1753]: time="2025-02-13T18:58:57.019990140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:58:57.020070 containerd[1753]: time="2025-02-13T18:58:57.020057020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 18:58:57.020199 containerd[1753]: time="2025-02-13T18:58:57.020183380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:58:57.021907 containerd[1753]: time="2025-02-13T18:58:57.021499580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:58:57.021907 containerd[1753]: time="2025-02-13T18:58:57.021624300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:58:57.021907 containerd[1753]: time="2025-02-13T18:58:57.021638180Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 18:58:57.021907 containerd[1753]: time="2025-02-13T18:58:57.021736620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 18:58:57.021907 containerd[1753]: time="2025-02-13T18:58:57.021779740Z" level=info msg="metadata content store policy set" policy=shared Feb 13 18:58:57.035926 containerd[1753]: time="2025-02-13T18:58:57.035882780Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 18:58:57.036111 containerd[1753]: time="2025-02-13T18:58:57.036097620Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 18:58:57.037143 containerd[1753]: time="2025-02-13T18:58:57.036854660Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 18:58:57.037143 containerd[1753]: time="2025-02-13T18:58:57.036898140Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 18:58:57.037143 containerd[1753]: time="2025-02-13T18:58:57.036916980Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 18:58:57.037143 containerd[1753]: time="2025-02-13T18:58:57.037089780Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 18:58:57.038712 containerd[1753]: time="2025-02-13T18:58:57.037577060Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 13 18:58:57.038712 containerd[1753]: time="2025-02-13T18:58:57.037697660Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 18:58:57.038712 containerd[1753]: time="2025-02-13T18:58:57.037713900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 18:58:57.038712 containerd[1753]: time="2025-02-13T18:58:57.037728900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 18:58:57.038712 containerd[1753]: time="2025-02-13T18:58:57.037743340Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 18:58:57.038712 containerd[1753]: time="2025-02-13T18:58:57.037757020Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 18:58:57.038712 containerd[1753]: time="2025-02-13T18:58:57.037768660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 18:58:57.038712 containerd[1753]: time="2025-02-13T18:58:57.037784140Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 18:58:57.038712 containerd[1753]: time="2025-02-13T18:58:57.037798580Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 18:58:57.038712 containerd[1753]: time="2025-02-13T18:58:57.037811500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 18:58:57.038712 containerd[1753]: time="2025-02-13T18:58:57.037825620Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 18:58:57.038712 containerd[1753]: time="2025-02-13T18:58:57.037836460Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 18:58:57.038712 containerd[1753]: time="2025-02-13T18:58:57.037855900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.038712 containerd[1753]: time="2025-02-13T18:58:57.037875220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.037888220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.037901260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.037912780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.037925220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.037937180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.037949540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.037963300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.037977900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.037988380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.037998860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.038010500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.038024780Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.038044580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.038057140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.038983 containerd[1753]: time="2025-02-13T18:58:57.038070820Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 18:58:57.039247 containerd[1753]: time="2025-02-13T18:58:57.038121860Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 18:58:57.039247 containerd[1753]: time="2025-02-13T18:58:57.038139980Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 18:58:57.039247 containerd[1753]: time="2025-02-13T18:58:57.038151980Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 18:58:57.039247 containerd[1753]: time="2025-02-13T18:58:57.038164220Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 18:58:57.039247 containerd[1753]: time="2025-02-13T18:58:57.038173660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 18:58:57.039247 containerd[1753]: time="2025-02-13T18:58:57.038188780Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 18:58:57.039247 containerd[1753]: time="2025-02-13T18:58:57.038198940Z" level=info msg="NRI interface is disabled by configuration." Feb 13 18:58:57.039247 containerd[1753]: time="2025-02-13T18:58:57.038209020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 18:58:57.039652 containerd[1753]: time="2025-02-13T18:58:57.039599860Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 18:58:57.040896 containerd[1753]: time="2025-02-13T18:58:57.039852020Z" level=info msg="Connect containerd service" Feb 13 18:58:57.040896 containerd[1753]: time="2025-02-13T18:58:57.039902860Z" level=info msg="using legacy CRI server" Feb 13 18:58:57.040896 containerd[1753]: time="2025-02-13T18:58:57.039910940Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 18:58:57.040896 containerd[1753]: time="2025-02-13T18:58:57.040032180Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 18:58:57.042116 containerd[1753]: time="2025-02-13T18:58:57.042093140Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 18:58:57.042307 
containerd[1753]: time="2025-02-13T18:58:57.042282180Z" level=info msg="Start subscribing containerd event" Feb 13 18:58:57.043304 containerd[1753]: time="2025-02-13T18:58:57.043109580Z" level=info msg="Start recovering state" Feb 13 18:58:57.043304 containerd[1753]: time="2025-02-13T18:58:57.043183300Z" level=info msg="Start event monitor" Feb 13 18:58:57.043304 containerd[1753]: time="2025-02-13T18:58:57.043194260Z" level=info msg="Start snapshots syncer" Feb 13 18:58:57.043304 containerd[1753]: time="2025-02-13T18:58:57.043204900Z" level=info msg="Start cni network conf syncer for default" Feb 13 18:58:57.043304 containerd[1753]: time="2025-02-13T18:58:57.043212540Z" level=info msg="Start streaming server" Feb 13 18:58:57.043708 containerd[1753]: time="2025-02-13T18:58:57.043691260Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 18:58:57.044355 containerd[1753]: time="2025-02-13T18:58:57.044288020Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 18:58:57.048833 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 18:58:57.056297 containerd[1753]: time="2025-02-13T18:58:57.056066180Z" level=info msg="containerd successfully booted in 0.099456s" Feb 13 18:58:57.377620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:58:57.389737 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:58:57.774908 sshd_keygen[1729]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 18:58:57.797512 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 18:58:57.814427 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 18:58:57.822576 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 18:58:57.831101 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 18:58:57.832280 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 18:58:57.850124 kubelet[1852]: E0213 18:58:57.849938 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:58:57.850663 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 18:58:57.858352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:58:57.858610 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:58:57.864584 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Feb 13 18:58:57.883562 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 18:58:57.895788 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 18:58:57.902703 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 18:58:57.911310 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 18:58:57.917862 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 18:58:57.928518 systemd[1]: Startup finished in 690ms (kernel) + 12.211s (initrd) + 11.523s (userspace) = 24.426s. 
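The kubelet exit above, and the restart loop that follows later in the log, come from the missing /var/lib/kubelet/config.yaml; that file is normally written when the node is initialised or joined with kubeadm, so the failures are expected on a freshly provisioned machine. A purely illustrative sketch of the same pre-flight check (not kubelet's code):

    # Illustrative sketch of the condition kubelet is failing on above: it refuses
    # to start until its config file exists (kubeadm normally writes it during
    # "kubeadm init" or "kubeadm join").
    import pathlib
    import sys

    KUBELET_CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

    def main() -> int:
        if not KUBELET_CONFIG.is_file():
            print(f"failed to load kubelet config file, path: {KUBELET_CONFIG}", file=sys.stderr)
            return 1  # non-zero exit -> systemd records status=1/FAILURE and schedules a restart
        print("config present; kubelet would proceed to load it")
        return 0

    if __name__ == "__main__":
        raise SystemExit(main())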
Feb 13 18:58:57.966376 agetty[1882]: failed to open credentials directory Feb 13 18:58:57.966477 agetty[1883]: failed to open credentials directory Feb 13 18:58:58.170824 login[1882]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Feb 13 18:58:58.172412 login[1883]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:58:58.181778 systemd-logind[1717]: New session 2 of user core. Feb 13 18:58:58.182323 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 18:58:58.192702 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 18:58:58.202613 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 18:58:58.205210 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 18:58:58.218686 (systemd)[1890]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 18:58:58.338169 systemd[1890]: Queued start job for default target default.target. Feb 13 18:58:58.342826 systemd[1890]: Created slice app.slice - User Application Slice. Feb 13 18:58:58.343038 systemd[1890]: Reached target paths.target - Paths. Feb 13 18:58:58.343108 systemd[1890]: Reached target timers.target - Timers. Feb 13 18:58:58.344319 systemd[1890]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 18:58:58.354454 systemd[1890]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 18:58:58.354514 systemd[1890]: Reached target sockets.target - Sockets. Feb 13 18:58:58.354525 systemd[1890]: Reached target basic.target - Basic System. Feb 13 18:58:58.354567 systemd[1890]: Reached target default.target - Main User Target. Feb 13 18:58:58.354592 systemd[1890]: Startup finished in 130ms. Feb 13 18:58:58.354840 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 18:58:58.362650 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 18:58:59.171352 login[1882]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:58:59.177513 systemd-logind[1717]: New session 1 of user core. Feb 13 18:58:59.183620 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 18:58:59.656499 waagent[1878]: 2025-02-13T18:58:59.655797Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 18:58:59.661756 waagent[1878]: 2025-02-13T18:58:59.661691Z INFO Daemon Daemon OS: flatcar 4186.1.1 Feb 13 18:58:59.666676 waagent[1878]: 2025-02-13T18:58:59.666620Z INFO Daemon Daemon Python: 3.11.10 Feb 13 18:58:59.671305 waagent[1878]: 2025-02-13T18:58:59.671247Z INFO Daemon Daemon Run daemon Feb 13 18:58:59.675749 waagent[1878]: 2025-02-13T18:58:59.675677Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.1' Feb 13 18:58:59.684913 waagent[1878]: 2025-02-13T18:58:59.684839Z INFO Daemon Daemon Using waagent for provisioning Feb 13 18:58:59.690421 waagent[1878]: 2025-02-13T18:58:59.690372Z INFO Daemon Daemon Activate resource disk Feb 13 18:58:59.695325 waagent[1878]: 2025-02-13T18:58:59.695268Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 18:58:59.708652 waagent[1878]: 2025-02-13T18:58:59.708586Z INFO Daemon Daemon Found device: None Feb 13 18:58:59.713392 waagent[1878]: 2025-02-13T18:58:59.713335Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 18:58:59.722163 waagent[1878]: 2025-02-13T18:58:59.722102Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 18:58:59.733766 waagent[1878]: 2025-02-13T18:58:59.733713Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 18:58:59.739672 waagent[1878]: 2025-02-13T18:58:59.739618Z INFO Daemon Daemon Running default provisioning handler Feb 13 18:58:59.752924 waagent[1878]: 2025-02-13T18:58:59.752836Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Feb 13 18:58:59.767837 waagent[1878]: 2025-02-13T18:58:59.767768Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 18:58:59.778345 waagent[1878]: 2025-02-13T18:58:59.778280Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 18:58:59.784026 waagent[1878]: 2025-02-13T18:58:59.783968Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 18:58:59.932230 waagent[1878]: 2025-02-13T18:58:59.928833Z INFO Daemon Daemon Successfully mounted dvd Feb 13 18:58:59.944076 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 18:58:59.946539 waagent[1878]: 2025-02-13T18:58:59.945656Z INFO Daemon Daemon Detect protocol endpoint Feb 13 18:58:59.950891 waagent[1878]: 2025-02-13T18:58:59.950823Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 18:58:59.957010 waagent[1878]: 2025-02-13T18:58:59.956933Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 13 18:58:59.963932 waagent[1878]: 2025-02-13T18:58:59.963869Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 18:58:59.969870 waagent[1878]: 2025-02-13T18:58:59.969815Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 18:58:59.975095 waagent[1878]: 2025-02-13T18:58:59.975045Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 18:59:00.039239 waagent[1878]: 2025-02-13T18:59:00.039190Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 18:59:00.046406 waagent[1878]: 2025-02-13T18:59:00.046373Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 18:59:00.052065 waagent[1878]: 2025-02-13T18:59:00.052004Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 18:59:00.352765 waagent[1878]: 2025-02-13T18:59:00.352609Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 18:59:00.359555 waagent[1878]: 2025-02-13T18:59:00.359491Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 18:59:00.369039 waagent[1878]: 2025-02-13T18:59:00.368985Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 18:59:00.389284 waagent[1878]: 2025-02-13T18:59:00.389240Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 18:59:00.395410 waagent[1878]: 2025-02-13T18:59:00.395361Z INFO Daemon Feb 13 18:59:00.398507 waagent[1878]: 2025-02-13T18:59:00.398456Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 958b8b93-a41e-44db-8696-dd51c6f17f57 eTag: 17710242198871558040 source: Fabric] Feb 13 18:59:00.410551 waagent[1878]: 2025-02-13T18:59:00.410506Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Feb 13 18:59:00.417872 waagent[1878]: 2025-02-13T18:59:00.417824Z INFO Daemon Feb 13 18:59:00.420679 waagent[1878]: 2025-02-13T18:59:00.420634Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 18:59:00.436288 waagent[1878]: 2025-02-13T18:59:00.436247Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 18:59:00.522099 waagent[1878]: 2025-02-13T18:59:00.522005Z INFO Daemon Downloaded certificate {'thumbprint': '49CA5F25BD8B42698D3E0B782196954E1B83CEA9', 'hasPrivateKey': False} Feb 13 18:59:00.532602 waagent[1878]: 2025-02-13T18:59:00.532547Z INFO Daemon Downloaded certificate {'thumbprint': '2CD95095AAB9076DCA705A8E982D803E70815848', 'hasPrivateKey': True} Feb 13 18:59:00.542901 waagent[1878]: 2025-02-13T18:59:00.542852Z INFO Daemon Fetch goal state completed Feb 13 18:59:00.554908 waagent[1878]: 2025-02-13T18:59:00.554843Z INFO Daemon Daemon Starting provisioning Feb 13 18:59:00.560044 waagent[1878]: 2025-02-13T18:59:00.559984Z INFO Daemon Daemon Handle ovf-env.xml. Feb 13 18:59:00.564926 waagent[1878]: 2025-02-13T18:59:00.564873Z INFO Daemon Daemon Set hostname [ci-4186.1.1-a-c0811b896b] Feb 13 18:59:00.636352 waagent[1878]: 2025-02-13T18:59:00.636275Z INFO Daemon Daemon Publish hostname [ci-4186.1.1-a-c0811b896b] Feb 13 18:59:00.643035 waagent[1878]: 2025-02-13T18:59:00.642968Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 18:59:00.649552 waagent[1878]: 2025-02-13T18:59:00.649497Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 18:59:00.692444 systemd-networkd[1503]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 18:59:00.693243 systemd-networkd[1503]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 18:59:00.693677 waagent[1878]: 2025-02-13T18:59:00.693303Z INFO Daemon Daemon Create user account if not exists Feb 13 18:59:00.693278 systemd-networkd[1503]: eth0: DHCP lease lost Feb 13 18:59:00.699365 waagent[1878]: 2025-02-13T18:59:00.699295Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 18:59:00.700543 systemd-networkd[1503]: eth0: DHCPv6 lease lost Feb 13 18:59:00.705181 waagent[1878]: 2025-02-13T18:59:00.705113Z INFO Daemon Daemon Configure sudoer Feb 13 18:59:00.710193 waagent[1878]: 2025-02-13T18:59:00.710128Z INFO Daemon Daemon Configure sshd Feb 13 18:59:00.715894 waagent[1878]: 2025-02-13T18:59:00.715787Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 18:59:00.729175 waagent[1878]: 2025-02-13T18:59:00.729107Z INFO Daemon Daemon Deploy ssh public key. Feb 13 18:59:00.747506 systemd-networkd[1503]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 18:59:01.817879 waagent[1878]: 2025-02-13T18:59:01.817820Z INFO Daemon Daemon Provisioning complete Feb 13 18:59:01.845123 waagent[1878]: 2025-02-13T18:59:01.836816Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 18:59:01.845484 waagent[1878]: 2025-02-13T18:59:01.845407Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 13 18:59:01.856929 waagent[1878]: 2025-02-13T18:59:01.856870Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 18:59:01.993312 waagent[1945]: 2025-02-13T18:59:01.992776Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 18:59:01.993312 waagent[1945]: 2025-02-13T18:59:01.992931Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.1 Feb 13 18:59:01.993312 waagent[1945]: 2025-02-13T18:59:01.992984Z INFO ExtHandler ExtHandler Python: 3.11.10 Feb 13 18:59:02.133531 waagent[1945]: 2025-02-13T18:59:02.133350Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 18:59:02.133664 waagent[1945]: 2025-02-13T18:59:02.133621Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 18:59:02.133727 waagent[1945]: 2025-02-13T18:59:02.133698Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 18:59:02.141577 waagent[1945]: 2025-02-13T18:59:02.141516Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 18:59:02.147373 waagent[1945]: 2025-02-13T18:59:02.147329Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 18:59:02.147887 waagent[1945]: 2025-02-13T18:59:02.147846Z INFO ExtHandler Feb 13 18:59:02.147965 waagent[1945]: 2025-02-13T18:59:02.147931Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6a0fe3b1-6353-4b23-8e84-a3a93d97fb38 eTag: 17710242198871558040 source: Fabric] Feb 13 18:59:02.148261 waagent[1945]: 2025-02-13T18:59:02.148221Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 13 18:59:02.148852 waagent[1945]: 2025-02-13T18:59:02.148806Z INFO ExtHandler Feb 13 18:59:02.148925 waagent[1945]: 2025-02-13T18:59:02.148892Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 18:59:02.157355 waagent[1945]: 2025-02-13T18:59:02.157312Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 18:59:02.252127 waagent[1945]: 2025-02-13T18:59:02.252025Z INFO ExtHandler Downloaded certificate {'thumbprint': '49CA5F25BD8B42698D3E0B782196954E1B83CEA9', 'hasPrivateKey': False} Feb 13 18:59:02.252589 waagent[1945]: 2025-02-13T18:59:02.252544Z INFO ExtHandler Downloaded certificate {'thumbprint': '2CD95095AAB9076DCA705A8E982D803E70815848', 'hasPrivateKey': True} Feb 13 18:59:02.252998 waagent[1945]: 2025-02-13T18:59:02.252957Z INFO ExtHandler Fetch goal state completed Feb 13 18:59:02.272035 waagent[1945]: 2025-02-13T18:59:02.271980Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1945 Feb 13 18:59:02.272184 waagent[1945]: 2025-02-13T18:59:02.272151Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 18:59:02.273790 waagent[1945]: 2025-02-13T18:59:02.273746Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 18:59:02.274174 waagent[1945]: 2025-02-13T18:59:02.274133Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 18:59:02.306882 waagent[1945]: 2025-02-13T18:59:02.306835Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 18:59:02.307108 waagent[1945]: 2025-02-13T18:59:02.307066Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 18:59:02.313697 waagent[1945]: 2025-02-13T18:59:02.313135Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 13 18:59:02.319762 systemd[1]: Reloading requested from client PID 1960 ('systemctl') (unit waagent.service)... Feb 13 18:59:02.320018 systemd[1]: Reloading... Feb 13 18:59:02.401840 zram_generator::config[1993]: No configuration found. Feb 13 18:59:02.501549 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 18:59:02.583261 systemd[1]: Reloading finished in 262 ms. Feb 13 18:59:02.607980 waagent[1945]: 2025-02-13T18:59:02.604598Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 18:59:02.611051 systemd[1]: Reloading requested from client PID 2048 ('systemctl') (unit waagent.service)... Feb 13 18:59:02.611063 systemd[1]: Reloading... Feb 13 18:59:02.685503 zram_generator::config[2082]: No configuration found. Feb 13 18:59:02.778921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 18:59:02.860615 systemd[1]: Reloading finished in 249 ms. 
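The goal-state exchange logged above targets http://168.63.129.16/machine/?comp=goalstate, and the daemon earlier reported wire protocol version 2012-11-30. A rough sketch of that request is below; sending the version as an "x-ms-version" header is an assumption on my part rather than something shown in the log.

    # Rough sketch of the WireServer goal-state request the agent is logging above.
    # The endpoint and the ?comp=goalstate query appear in the log; the x-ms-version
    # header carrying the negotiated protocol version is an assumption here.
    import urllib.request

    WIRESERVER = "168.63.129.16"
    PROTOCOL_VERSION = "2012-11-30"  # "Wire protocol version" reported by the daemon

    def fetch_goal_state(timeout: float = 10.0) -> str:
        req = urllib.request.Request(
            f"http://{WIRESERVER}/machine/?comp=goalstate",
            headers={"x-ms-version": PROTOCOL_VERSION},
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.read().decode()  # XML describing incarnation, certificates, etc.

    if __name__ == "__main__":
        print(fetch_goal_state()[:400])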
Feb 13 18:59:02.884342 waagent[1945]: 2025-02-13T18:59:02.883571Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 18:59:02.884342 waagent[1945]: 2025-02-13T18:59:02.883743Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 18:59:03.260116 waagent[1945]: 2025-02-13T18:59:03.259993Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 13 18:59:03.261006 waagent[1945]: 2025-02-13T18:59:03.260960Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 18:59:03.261844 waagent[1945]: 2025-02-13T18:59:03.261794Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 18:59:03.261968 waagent[1945]: 2025-02-13T18:59:03.261922Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 18:59:03.262194 waagent[1945]: 2025-02-13T18:59:03.262146Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 18:59:03.262606 waagent[1945]: 2025-02-13T18:59:03.262549Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 18:59:03.262942 waagent[1945]: 2025-02-13T18:59:03.262883Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 13 18:59:03.263319 waagent[1945]: 2025-02-13T18:59:03.263261Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 18:59:03.263516 waagent[1945]: 2025-02-13T18:59:03.263462Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 13 18:59:03.263677 waagent[1945]: 2025-02-13T18:59:03.263598Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 13 18:59:03.263677 waagent[1945]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 13 18:59:03.263677 waagent[1945]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 13 18:59:03.263677 waagent[1945]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 13 18:59:03.263677 waagent[1945]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 13 18:59:03.263677 waagent[1945]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 18:59:03.263677 waagent[1945]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 18:59:03.263810 waagent[1945]: 2025-02-13T18:59:03.263691Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 18:59:03.263810 waagent[1945]: 2025-02-13T18:59:03.263762Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 18:59:03.263989 waagent[1945]: 2025-02-13T18:59:03.263889Z INFO EnvHandler ExtHandler Configure routes Feb 13 18:59:03.264072 waagent[1945]: 2025-02-13T18:59:03.264032Z INFO EnvHandler ExtHandler Gateway:None Feb 13 18:59:03.264130 waagent[1945]: 2025-02-13T18:59:03.264104Z INFO EnvHandler ExtHandler Routes:None Feb 13 18:59:03.264528 waagent[1945]: 2025-02-13T18:59:03.264469Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 18:59:03.264672 waagent[1945]: 2025-02-13T18:59:03.264599Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
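The routing table MonitorHandler printed above is the raw /proc/net/route format, in which the Destination, Gateway and Mask columns are little-endian hexadecimal IPv4 values; 0114C80A decodes to the gateway 10.200.20.1 seen in the earlier DHCP lease, and 0014C80A to the 10.200.20.0/24 subnet. A small decoding sketch (my own, not part of waagent):

    # Sketch of decoding the /proc/net/route dump shown above: the hex columns are
    # little-endian IPv4 addresses, so 0114C80A -> 10.200.20.1 (the default gateway).
    import socket
    import struct

    def hex_to_ip(value: str) -> str:
        return socket.inet_ntoa(struct.pack("<L", int(value, 16)))

    def routes(path: str = "/proc/net/route"):
        with open(path) as fh:
            next(fh)  # skip the header row (Iface Destination Gateway Flags ...)
            for line in fh:
                iface, dest, gateway, *_rest = line.split()
                yield iface, hex_to_ip(dest), hex_to_ip(gateway)

    if __name__ == "__main__":
        for iface, dest, gw in routes():
            print(f"{iface}: {dest} via {gw}")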
Feb 13 18:59:03.264805 waagent[1945]: 2025-02-13T18:59:03.264760Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 18:59:03.272045 waagent[1945]: 2025-02-13T18:59:03.271987Z INFO ExtHandler ExtHandler Feb 13 18:59:03.272546 waagent[1945]: 2025-02-13T18:59:03.272489Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: cabd61b2-97d5-40a3-ac65-3c2ded5948ea correlation 84d746fc-b563-4dc7-99c0-cfb667f173ed created: 2025-02-13T18:57:46.223770Z] Feb 13 18:59:03.273155 waagent[1945]: 2025-02-13T18:59:03.273115Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 13 18:59:03.274053 waagent[1945]: 2025-02-13T18:59:03.273773Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Feb 13 18:59:03.304649 waagent[1945]: 2025-02-13T18:59:03.304088Z INFO MonitorHandler ExtHandler Network interfaces: Feb 13 18:59:03.304649 waagent[1945]: Executing ['ip', '-a', '-o', 'link']: Feb 13 18:59:03.304649 waagent[1945]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 13 18:59:03.304649 waagent[1945]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:bf:f3 brd ff:ff:ff:ff:ff:ff Feb 13 18:59:03.304649 waagent[1945]: 3: enP25464s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:bf:f3 brd ff:ff:ff:ff:ff:ff\ altname enP25464p0s2 Feb 13 18:59:03.304649 waagent[1945]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 13 18:59:03.304649 waagent[1945]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 13 18:59:03.304649 waagent[1945]: 2: eth0 inet 10.200.20.27/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 13 18:59:03.304649 waagent[1945]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 13 18:59:03.304649 waagent[1945]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Feb 13 18:59:03.304649 waagent[1945]: 2: eth0 inet6 fe80::222:48ff:fe7b:bff3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 18:59:03.304649 waagent[1945]: 3: enP25464s1 inet6 fe80::222:48ff:fe7b:bff3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 18:59:03.320128 waagent[1945]: 2025-02-13T18:59:03.320065Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6F760F59-2519-4CBA-8D51-5F8C1B296D4B;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Feb 13 18:59:03.395030 waagent[1945]: 2025-02-13T18:59:03.394936Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 13 18:59:03.395030 waagent[1945]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 18:59:03.395030 waagent[1945]: pkts bytes target prot opt in out source destination Feb 13 18:59:03.395030 waagent[1945]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 18:59:03.395030 waagent[1945]: pkts bytes target prot opt in out source destination Feb 13 18:59:03.395030 waagent[1945]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 18:59:03.395030 waagent[1945]: pkts bytes target prot opt in out source destination Feb 13 18:59:03.395030 waagent[1945]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 18:59:03.395030 waagent[1945]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 18:59:03.395030 waagent[1945]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 18:59:03.398687 waagent[1945]: 2025-02-13T18:59:03.398615Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 13 18:59:03.398687 waagent[1945]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 18:59:03.398687 waagent[1945]: pkts bytes target prot opt in out source destination Feb 13 18:59:03.398687 waagent[1945]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 18:59:03.398687 waagent[1945]: pkts bytes target prot opt in out source destination Feb 13 18:59:03.398687 waagent[1945]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 18:59:03.398687 waagent[1945]: pkts bytes target prot opt in out source destination Feb 13 18:59:03.398687 waagent[1945]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 18:59:03.398687 waagent[1945]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 18:59:03.398687 waagent[1945]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 18:59:03.399093 waagent[1945]: 2025-02-13T18:59:03.399040Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 13 18:59:08.109354 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 18:59:08.116658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:59:08.223086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:59:08.227563 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:59:08.286538 kubelet[2177]: E0213 18:59:08.286493 2177 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:59:08.289685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:59:08.289943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:59:18.540297 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 18:59:18.548693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:59:18.893979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
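The firewall listing EnvHandler printed a few entries above amounts to three OUTPUT-chain rules for the WireServer address: allow TCP port 53, allow traffic owned by UID 0, and drop other new connections. Below is a sketch of equivalent iptables invocations; the dump does not name a table, so using the default filter table is an assumption, as is driving iptables from a small Python wrapper at all.

    # Sketch reproducing the three OUTPUT rules shown in the dump above for the
    # WireServer address. Table choice (filter) is an assumption; order matters,
    # since the ACCEPT rules must precede the DROP.
    import subprocess

    WIRESERVER = "168.63.129.16"

    RULES = [
        ["-p", "tcp", "--dport", "53", "-j", "ACCEPT"],                          # DNS to the fabric
        ["-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],        # root (the agent) only
        ["-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],  # everything else
    ]

    def apply_rules(dry_run: bool = True) -> None:
        for rule in RULES:
            cmd = ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, *rule]
            if dry_run:
                print(" ".join(cmd))
            else:
                subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        apply_rules()  # prints the commands; pass dry_run=False to actually apply them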
Feb 13 18:59:18.898559 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:59:18.937369 kubelet[2192]: E0213 18:59:18.937326 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:59:18.939266 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:59:18.939524 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:59:19.992258 chronyd[1709]: Selected source PHC0 Feb 13 18:59:29.162195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 18:59:29.170680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:59:29.488961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:59:29.499791 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:59:29.535574 kubelet[2207]: E0213 18:59:29.535486 2207 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:59:29.537634 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:59:29.537779 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:59:39.662200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 18:59:39.672619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:59:39.880252 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 13 18:59:39.935960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:59:39.940144 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:59:39.978782 kubelet[2222]: E0213 18:59:39.978680 2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:59:39.981321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:59:39.981635 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:59:41.522560 update_engine[1723]: I20250213 18:59:41.522477 1723 update_attempter.cc:509] Updating boot flags... Feb 13 18:59:41.590510 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2243) Feb 13 18:59:41.692691 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2243) Feb 13 18:59:50.162024 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 18:59:50.178706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 18:59:50.514148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:59:50.518525 (kubelet)[2350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:59:50.554044 kubelet[2350]: E0213 18:59:50.553947 2350 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:59:50.556357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:59:50.556540 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:59:53.455460 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 18:59:53.463753 systemd[1]: Started sshd@0-10.200.20.27:22-10.200.16.10:59522.service - OpenSSH per-connection server daemon (10.200.16.10:59522). Feb 13 18:59:54.048687 sshd[2358]: Accepted publickey for core from 10.200.16.10 port 59522 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0 Feb 13 18:59:54.049953 sshd-session[2358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:54.054174 systemd-logind[1717]: New session 3 of user core. Feb 13 18:59:54.062594 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 18:59:54.470094 systemd[1]: Started sshd@1-10.200.20.27:22-10.200.16.10:59536.service - OpenSSH per-connection server daemon (10.200.16.10:59536). Feb 13 18:59:54.920666 sshd[2363]: Accepted publickey for core from 10.200.16.10 port 59536 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0 Feb 13 18:59:54.921956 sshd-session[2363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:54.926025 systemd-logind[1717]: New session 4 of user core. Feb 13 18:59:54.934615 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 18:59:55.268339 sshd[2365]: Connection closed by 10.200.16.10 port 59536 Feb 13 18:59:55.268151 sshd-session[2363]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:55.271873 systemd[1]: sshd@1-10.200.20.27:22-10.200.16.10:59536.service: Deactivated successfully. Feb 13 18:59:55.273398 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 18:59:55.275634 systemd-logind[1717]: Session 4 logged out. Waiting for processes to exit. Feb 13 18:59:55.276554 systemd-logind[1717]: Removed session 4. Feb 13 18:59:55.353759 systemd[1]: Started sshd@2-10.200.20.27:22-10.200.16.10:59552.service - OpenSSH per-connection server daemon (10.200.16.10:59552). Feb 13 18:59:55.842235 sshd[2370]: Accepted publickey for core from 10.200.16.10 port 59552 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0 Feb 13 18:59:55.843518 sshd-session[2370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:55.847287 systemd-logind[1717]: New session 5 of user core. Feb 13 18:59:55.857643 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 18:59:56.203412 sshd[2372]: Connection closed by 10.200.16.10 port 59552 Feb 13 18:59:56.202834 sshd-session[2370]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:56.206520 systemd-logind[1717]: Session 5 logged out. Waiting for processes to exit. 
Feb 13 18:59:56.206790 systemd[1]: sshd@2-10.200.20.27:22-10.200.16.10:59552.service: Deactivated successfully. Feb 13 18:59:56.208282 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 18:59:56.209151 systemd-logind[1717]: Removed session 5. Feb 13 18:59:56.282843 systemd[1]: Started sshd@3-10.200.20.27:22-10.200.16.10:59562.service - OpenSSH per-connection server daemon (10.200.16.10:59562). Feb 13 18:59:56.739274 sshd[2377]: Accepted publickey for core from 10.200.16.10 port 59562 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0 Feb 13 18:59:56.740600 sshd-session[2377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:56.745621 systemd-logind[1717]: New session 6 of user core. Feb 13 18:59:56.751709 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 18:59:57.086658 sshd[2379]: Connection closed by 10.200.16.10 port 59562 Feb 13 18:59:57.087428 sshd-session[2377]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:57.090622 systemd[1]: sshd@3-10.200.20.27:22-10.200.16.10:59562.service: Deactivated successfully. Feb 13 18:59:57.092145 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 18:59:57.092806 systemd-logind[1717]: Session 6 logged out. Waiting for processes to exit. Feb 13 18:59:57.093776 systemd-logind[1717]: Removed session 6. Feb 13 18:59:57.172902 systemd[1]: Started sshd@4-10.200.20.27:22-10.200.16.10:59574.service - OpenSSH per-connection server daemon (10.200.16.10:59574). Feb 13 18:59:57.660628 sshd[2384]: Accepted publickey for core from 10.200.16.10 port 59574 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0 Feb 13 18:59:57.661861 sshd-session[2384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:57.665558 systemd-logind[1717]: New session 7 of user core. Feb 13 18:59:57.671656 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 18:59:58.084743 sudo[2387]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 18:59:58.085018 sudo[2387]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:59:58.118945 sudo[2387]: pam_unix(sudo:session): session closed for user root Feb 13 18:59:58.197848 sshd[2386]: Connection closed by 10.200.16.10 port 59574 Feb 13 18:59:58.197681 sshd-session[2384]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:58.200493 systemd[1]: sshd@4-10.200.20.27:22-10.200.16.10:59574.service: Deactivated successfully. Feb 13 18:59:58.202211 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 18:59:58.203714 systemd-logind[1717]: Session 7 logged out. Waiting for processes to exit. Feb 13 18:59:58.204989 systemd-logind[1717]: Removed session 7. Feb 13 18:59:58.284554 systemd[1]: Started sshd@5-10.200.20.27:22-10.200.16.10:59588.service - OpenSSH per-connection server daemon (10.200.16.10:59588). Feb 13 18:59:58.778317 sshd[2392]: Accepted publickey for core from 10.200.16.10 port 59588 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0 Feb 13 18:59:58.779623 sshd-session[2392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:58.783319 systemd-logind[1717]: New session 8 of user core. Feb 13 18:59:58.790598 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 18:59:59.052266 sudo[2396]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 18:59:59.053091 sudo[2396]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:59:59.056202 sudo[2396]: pam_unix(sudo:session): session closed for user root Feb 13 18:59:59.060838 sudo[2395]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 18:59:59.061096 sudo[2395]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:59:59.082752 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 18:59:59.104541 augenrules[2418]: No rules Feb 13 18:59:59.105706 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 18:59:59.106550 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 18:59:59.108020 sudo[2395]: pam_unix(sudo:session): session closed for user root Feb 13 18:59:59.200958 sshd[2394]: Connection closed by 10.200.16.10 port 59588 Feb 13 18:59:59.201535 sshd-session[2392]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:59.204074 systemd-logind[1717]: Session 8 logged out. Waiting for processes to exit. Feb 13 18:59:59.204324 systemd[1]: sshd@5-10.200.20.27:22-10.200.16.10:59588.service: Deactivated successfully. Feb 13 18:59:59.205956 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 18:59:59.207796 systemd-logind[1717]: Removed session 8. Feb 13 18:59:59.287930 systemd[1]: Started sshd@6-10.200.20.27:22-10.200.16.10:39868.service - OpenSSH per-connection server daemon (10.200.16.10:39868). Feb 13 18:59:59.776610 sshd[2426]: Accepted publickey for core from 10.200.16.10 port 39868 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0 Feb 13 18:59:59.777813 sshd-session[2426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:59.781538 systemd-logind[1717]: New session 9 of user core. Feb 13 18:59:59.788599 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:00:00.048382 sudo[2429]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:00:00.048667 sudo[2429]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:00:00.524652 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:00:00.531684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:00:00.556861 systemd[1]: Reloading requested from client PID 2461 ('systemctl') (unit session-9.scope)... Feb 13 19:00:00.557008 systemd[1]: Reloading... Feb 13 19:00:00.660470 zram_generator::config[2500]: No configuration found. Feb 13 19:00:00.770461 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:00:00.850094 systemd[1]: Reloading finished in 292 ms. Feb 13 19:00:00.896961 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:00:00.897043 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:00:00.897525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:00:00.904450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:00:01.046294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
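The sudo entries above (setenforce, the audit-rules cleanup, install.sh) all follow the same fixed field layout: `user : PWD=... ; USER=... ; COMMAND=...`. A small hedged sketch for pulling those fields out of such a line; the sample string is copied from the journal above, while the regex and field names are mine.

```python
import re

# Field layout as it appears in the sudo journal lines above:
#   <user> : PWD=<cwd> ; USER=<target> ; COMMAND=<command and args>
SUDO_RE = re.compile(
    r"(?P<user>\S+) : PWD=(?P<pwd>\S+) ; USER=(?P<target>\S+) ; COMMAND=(?P<command>.+)"
)

sample = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules"

match = SUDO_RE.search(sample)
if match:
    # {'user': 'core', 'pwd': '/home/core', 'target': 'root',
    #  'command': '/usr/sbin/systemctl restart audit-rules'}
    print(match.groupdict())
```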
Feb 13 19:00:01.057796 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:00:01.094627 kubelet[2567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:00:01.094627 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:00:01.094627 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:00:01.108095 kubelet[2567]: I0213 19:00:01.107323 2567 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:00:02.769970 kubelet[2567]: I0213 19:00:02.769931 2567 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:00:02.769970 kubelet[2567]: I0213 19:00:02.769960 2567 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:00:02.770327 kubelet[2567]: I0213 19:00:02.770219 2567 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:00:02.788627 kubelet[2567]: I0213 19:00:02.788424 2567 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:00:02.796742 kubelet[2567]: E0213 19:00:02.796698 2567 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:00:02.796742 kubelet[2567]: I0213 19:00:02.796738 2567 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:00:02.799522 kubelet[2567]: I0213 19:00:02.799489 2567 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:00:02.800567 kubelet[2567]: I0213 19:00:02.800523 2567 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:00:02.800747 kubelet[2567]: I0213 19:00:02.800570 2567 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.20.27","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:00:02.800834 kubelet[2567]: I0213 19:00:02.800757 2567 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:00:02.800834 kubelet[2567]: I0213 19:00:02.800766 2567 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:00:02.800916 kubelet[2567]: I0213 19:00:02.800896 2567 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:00:02.803820 kubelet[2567]: I0213 19:00:02.803793 2567 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:00:02.803820 kubelet[2567]: I0213 19:00:02.803820 2567 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:00:02.803948 kubelet[2567]: I0213 19:00:02.803838 2567 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:00:02.803948 kubelet[2567]: I0213 19:00:02.803848 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:00:02.805511 kubelet[2567]: E0213 19:00:02.805485 2567 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:02.806027 kubelet[2567]: E0213 19:00:02.805981 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:02.807469 kubelet[2567]: I0213 19:00:02.807390 2567 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:00:02.808248 kubelet[2567]: I0213 19:00:02.808045 2567 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:00:02.808248 kubelet[2567]: W0213 19:00:02.808117 2567 probe.go:272] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:00:02.809958 kubelet[2567]: I0213 19:00:02.809772 2567 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:00:02.809958 kubelet[2567]: I0213 19:00:02.809811 2567 server.go:1287] "Started kubelet" Feb 13 19:00:02.812504 kubelet[2567]: I0213 19:00:02.812480 2567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:00:02.818063 kubelet[2567]: E0213 19:00:02.817919 2567 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.27.1823d9a96dc9e28c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.27,UID:10.200.20.27,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.20.27,},FirstTimestamp:2025-02-13 19:00:02.809791116 +0000 UTC m=+1.748983491,LastTimestamp:2025-02-13 19:00:02.809791116 +0000 UTC m=+1.748983491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.27,}" Feb 13 19:00:02.819727 kubelet[2567]: I0213 19:00:02.819030 2567 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:00:02.819727 kubelet[2567]: I0213 19:00:02.819510 2567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:00:02.819849 kubelet[2567]: I0213 19:00:02.819785 2567 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:00:02.820045 kubelet[2567]: I0213 19:00:02.820016 2567 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:00:02.820627 kubelet[2567]: I0213 19:00:02.820611 2567 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:00:02.822285 kubelet[2567]: E0213 19:00:02.821468 2567 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:00:02.822285 kubelet[2567]: I0213 19:00:02.821714 2567 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:00:02.822285 kubelet[2567]: E0213 19:00:02.821897 2567 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.200.20.27\" not found" Feb 13 19:00:02.823338 kubelet[2567]: I0213 19:00:02.823297 2567 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:00:02.823485 kubelet[2567]: I0213 19:00:02.823475 2567 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:00:02.824796 kubelet[2567]: I0213 19:00:02.824774 2567 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:00:02.824996 kubelet[2567]: I0213 19:00:02.824975 2567 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:00:02.827914 kubelet[2567]: I0213 19:00:02.827891 2567 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:00:02.847814 kubelet[2567]: W0213 19:00:02.847768 2567 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:00:02.847936 kubelet[2567]: E0213 19:00:02.847821 2567 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 19:00:02.847936 kubelet[2567]: W0213 19:00:02.847877 2567 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.20.27" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:00:02.847936 kubelet[2567]: E0213 19:00:02.847890 2567 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.200.20.27\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:00:02.847936 kubelet[2567]: W0213 19:00:02.847916 2567 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:00:02.847936 kubelet[2567]: E0213 19:00:02.847930 2567 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:00:02.849212 kubelet[2567]: E0213 19:00:02.849067 2567 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.27.1823d9a96e7bd597 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.27,UID:10.200.20.27,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.200.20.27,},FirstTimestamp:2025-02-13 19:00:02.821453207 +0000 UTC m=+1.760645582,LastTimestamp:2025-02-13 19:00:02.821453207 +0000 UTC m=+1.760645582,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.27,}" Feb 13 19:00:02.849212 kubelet[2567]: E0213 19:00:02.849177 2567 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.20.27\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 19:00:02.853462 kubelet[2567]: E0213 19:00:02.852721 2567 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.27.1823d9a97043cec2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.27,UID:10.200.20.27,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.200.20.27 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.200.20.27,},FirstTimestamp:2025-02-13 19:00:02.851335874 +0000 UTC m=+1.790528249,LastTimestamp:2025-02-13 19:00:02.851335874 +0000 UTC m=+1.790528249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.27,}" Feb 13 19:00:02.858492 kubelet[2567]: I0213 19:00:02.858455 2567 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:00:02.858918 kubelet[2567]: I0213 19:00:02.858637 2567 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:00:02.858918 kubelet[2567]: I0213 19:00:02.858614 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:00:02.858918 kubelet[2567]: I0213 19:00:02.858661 2567 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:00:02.861163 kubelet[2567]: I0213 19:00:02.861127 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:00:02.862560 kubelet[2567]: I0213 19:00:02.861232 2567 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:00:02.862560 kubelet[2567]: I0213 19:00:02.861254 2567 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
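The container_manager_linux.go:273 entry earlier dumps the node config as JSON, including the hard eviction thresholds the kubelet will enforce. A hedged sketch that loads a trimmed excerpt of that dump (copied from the log, with GracePeriod/MinReclaim dropped for brevity) and prints the thresholds in readable form; the formatting code is illustrative.

```python
import json

# Trimmed excerpt of HardEvictionThresholds from the nodeConfig dump above.
HARD_EVICTION_THRESHOLDS = json.loads("""
[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]
""")

for threshold in HARD_EVICTION_THRESHOLDS:
    value = threshold["Value"]
    limit = value["Quantity"] or f"{value['Percentage']:.0%}"
    print(f"evict when {threshold['Signal']} {threshold['Operator']} {limit}")
# evict when nodefs.inodesFree LessThan 5%
# evict when imagefs.available LessThan 15%
# evict when imagefs.inodesFree LessThan 5%
# evict when memory.available LessThan 100Mi
# evict when nodefs.available LessThan 10%
```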
Feb 13 19:00:02.862560 kubelet[2567]: I0213 19:00:02.861261 2567 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:00:02.862560 kubelet[2567]: E0213 19:00:02.861390 2567 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:00:02.864143 kubelet[2567]: I0213 19:00:02.864096 2567 policy_none.go:49] "None policy: Start" Feb 13 19:00:02.864143 kubelet[2567]: I0213 19:00:02.864128 2567 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:00:02.864143 kubelet[2567]: I0213 19:00:02.864139 2567 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:00:02.873191 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:00:02.882706 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:00:02.886496 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:00:02.894347 kubelet[2567]: I0213 19:00:02.894170 2567 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:00:02.894492 kubelet[2567]: I0213 19:00:02.894370 2567 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:00:02.894492 kubelet[2567]: I0213 19:00:02.894381 2567 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:00:02.895289 kubelet[2567]: I0213 19:00:02.894767 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:00:02.896253 kubelet[2567]: E0213 19:00:02.895872 2567 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:00:02.896494 kubelet[2567]: E0213 19:00:02.896476 2567 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.27\" not found" Feb 13 19:00:02.996007 kubelet[2567]: I0213 19:00:02.995905 2567 kubelet_node_status.go:76] "Attempting to register node" node="10.200.20.27" Feb 13 19:00:03.006812 kubelet[2567]: I0213 19:00:03.006776 2567 kubelet_node_status.go:79] "Successfully registered node" node="10.200.20.27" Feb 13 19:00:03.006812 kubelet[2567]: E0213 19:00:03.006810 2567 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.200.20.27\": node \"10.200.20.27\" not found" Feb 13 19:00:03.011914 kubelet[2567]: E0213 19:00:03.011883 2567 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.200.20.27\" not found" Feb 13 19:00:03.097697 sudo[2429]: pam_unix(sudo:session): session closed for user root Feb 13 19:00:03.112296 kubelet[2567]: E0213 19:00:03.112251 2567 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.200.20.27\" not found" Feb 13 19:00:03.176479 sshd[2428]: Connection closed by 10.200.16.10 port 39868 Feb 13 19:00:03.176993 sshd-session[2426]: pam_unix(sshd:session): session closed for user core Feb 13 19:00:03.180544 systemd[1]: sshd@6-10.200.20.27:22-10.200.16.10:39868.service: Deactivated successfully. Feb 13 19:00:03.180732 systemd-logind[1717]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:00:03.183057 systemd[1]: session-9.scope: Deactivated successfully. 
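The kubelet messages above carry the standard klog header (severity letter, month/day, wall-clock time, PID, source file:line, then the message). A hedged sketch that splits such a header; the sample line is taken from the log above, the regex and group names are mine.

```python
import re

# klog-style header: <severity><MMDD> <hh:mm:ss.micros> <pid> <file:line>] <message>
KLOG_RE = re.compile(
    r"(?P<severity>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +(?P<pid>\d+) "
    r"(?P<source>[\w./-]+:\d+)\] (?P<message>.*)"
)

sample = 'I0213 19:00:02.803848 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods"'

m = KLOG_RE.match(sample)
if m:
    print(m.group("severity"), m.group("time"), m.group("source"), m.group("message"))
    # I 19:00:02.803848 apiserver.go:42 "Waiting for node sync before watching apiserver pods"
```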
Feb 13 19:00:03.183972 systemd-logind[1717]: Removed session 9. Feb 13 19:00:03.212561 kubelet[2567]: E0213 19:00:03.212522 2567 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.200.20.27\" not found" Feb 13 19:00:03.313246 kubelet[2567]: E0213 19:00:03.313187 2567 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.200.20.27\" not found" Feb 13 19:00:03.413766 kubelet[2567]: E0213 19:00:03.413730 2567 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.200.20.27\" not found" Feb 13 19:00:03.514391 kubelet[2567]: E0213 19:00:03.514364 2567 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.200.20.27\" not found" Feb 13 19:00:03.614877 kubelet[2567]: E0213 19:00:03.614849 2567 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.200.20.27\" not found" Feb 13 19:00:03.715466 kubelet[2567]: E0213 19:00:03.715355 2567 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.200.20.27\" not found" Feb 13 19:00:03.771837 kubelet[2567]: I0213 19:00:03.771792 2567 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:00:03.772404 kubelet[2567]: W0213 19:00:03.772372 2567 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:00:03.807111 kubelet[2567]: E0213 19:00:03.807081 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:03.816515 kubelet[2567]: E0213 19:00:03.816474 2567 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.200.20.27\" not found" Feb 13 19:00:03.916846 kubelet[2567]: E0213 19:00:03.916816 2567 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.200.20.27\" not found" Feb 13 19:00:04.018360 kubelet[2567]: I0213 19:00:04.018256 2567 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:00:04.019127 kubelet[2567]: I0213 19:00:04.019009 2567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:00:04.019183 containerd[1753]: time="2025-02-13T19:00:04.018813842Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:00:04.807721 kubelet[2567]: I0213 19:00:04.807681 2567 apiserver.go:52] "Watching apiserver" Feb 13 19:00:04.808058 kubelet[2567]: E0213 19:00:04.807683 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:04.812966 kubelet[2567]: E0213 19:00:04.812324 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:04.822289 systemd[1]: Created slice kubepods-besteffort-pod3fe415d4_f302_450f_91d3_fcb01c83c343.slice - libcontainer container kubepods-besteffort-pod3fe415d4_f302_450f_91d3_fcb01c83c343.slice. 
Feb 13 19:00:04.825091 kubelet[2567]: I0213 19:00:04.825046 2567 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:00:04.835231 kubelet[2567]: I0213 19:00:04.834713 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c5666df-4229-496f-8e68-a4354f6b8968-kubelet-dir\") pod \"csi-node-driver-x96qx\" (UID: \"4c5666df-4229-496f-8e68-a4354f6b8968\") " pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:04.835231 kubelet[2567]: I0213 19:00:04.834747 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84l95\" (UniqueName: \"kubernetes.io/projected/4c5666df-4229-496f-8e68-a4354f6b8968-kube-api-access-84l95\") pod \"csi-node-driver-x96qx\" (UID: \"4c5666df-4229-496f-8e68-a4354f6b8968\") " pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:04.835231 kubelet[2567]: I0213 19:00:04.834769 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fe415d4-f302-450f-91d3-fcb01c83c343-lib-modules\") pod \"calico-node-h8mcp\" (UID: \"3fe415d4-f302-450f-91d3-fcb01c83c343\") " pod="calico-system/calico-node-h8mcp" Feb 13 19:00:04.835231 kubelet[2567]: I0213 19:00:04.834788 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3fe415d4-f302-450f-91d3-fcb01c83c343-cni-net-dir\") pod \"calico-node-h8mcp\" (UID: \"3fe415d4-f302-450f-91d3-fcb01c83c343\") " pod="calico-system/calico-node-h8mcp" Feb 13 19:00:04.835231 kubelet[2567]: I0213 19:00:04.834802 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4c5666df-4229-496f-8e68-a4354f6b8968-varrun\") pod \"csi-node-driver-x96qx\" (UID: \"4c5666df-4229-496f-8e68-a4354f6b8968\") " pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:04.835628 kubelet[2567]: I0213 19:00:04.834817 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c5666df-4229-496f-8e68-a4354f6b8968-registration-dir\") pod \"csi-node-driver-x96qx\" (UID: \"4c5666df-4229-496f-8e68-a4354f6b8968\") " pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:04.835628 kubelet[2567]: I0213 19:00:04.834834 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7956ccd-d77e-443e-8487-d6974a1e5d76-xtables-lock\") pod \"kube-proxy-cgd9g\" (UID: \"a7956ccd-d77e-443e-8487-d6974a1e5d76\") " pod="kube-system/kube-proxy-cgd9g" Feb 13 19:00:04.835628 kubelet[2567]: I0213 19:00:04.834849 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7956ccd-d77e-443e-8487-d6974a1e5d76-lib-modules\") pod \"kube-proxy-cgd9g\" (UID: \"a7956ccd-d77e-443e-8487-d6974a1e5d76\") " pod="kube-system/kube-proxy-cgd9g" Feb 13 19:00:04.835628 kubelet[2567]: I0213 19:00:04.834867 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fe415d4-f302-450f-91d3-fcb01c83c343-tigera-ca-bundle\") pod 
\"calico-node-h8mcp\" (UID: \"3fe415d4-f302-450f-91d3-fcb01c83c343\") " pod="calico-system/calico-node-h8mcp" Feb 13 19:00:04.835628 kubelet[2567]: I0213 19:00:04.834882 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c5666df-4229-496f-8e68-a4354f6b8968-socket-dir\") pod \"csi-node-driver-x96qx\" (UID: \"4c5666df-4229-496f-8e68-a4354f6b8968\") " pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:04.837087 kubelet[2567]: I0213 19:00:04.834898 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chfbc\" (UniqueName: \"kubernetes.io/projected/a7956ccd-d77e-443e-8487-d6974a1e5d76-kube-api-access-chfbc\") pod \"kube-proxy-cgd9g\" (UID: \"a7956ccd-d77e-443e-8487-d6974a1e5d76\") " pod="kube-system/kube-proxy-cgd9g" Feb 13 19:00:04.837087 kubelet[2567]: I0213 19:00:04.834915 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fe415d4-f302-450f-91d3-fcb01c83c343-xtables-lock\") pod \"calico-node-h8mcp\" (UID: \"3fe415d4-f302-450f-91d3-fcb01c83c343\") " pod="calico-system/calico-node-h8mcp" Feb 13 19:00:04.837087 kubelet[2567]: I0213 19:00:04.834930 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3fe415d4-f302-450f-91d3-fcb01c83c343-node-certs\") pod \"calico-node-h8mcp\" (UID: \"3fe415d4-f302-450f-91d3-fcb01c83c343\") " pod="calico-system/calico-node-h8mcp" Feb 13 19:00:04.837087 kubelet[2567]: I0213 19:00:04.834954 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3fe415d4-f302-450f-91d3-fcb01c83c343-var-run-calico\") pod \"calico-node-h8mcp\" (UID: \"3fe415d4-f302-450f-91d3-fcb01c83c343\") " pod="calico-system/calico-node-h8mcp" Feb 13 19:00:04.837087 kubelet[2567]: I0213 19:00:04.834971 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3fe415d4-f302-450f-91d3-fcb01c83c343-cni-bin-dir\") pod \"calico-node-h8mcp\" (UID: \"3fe415d4-f302-450f-91d3-fcb01c83c343\") " pod="calico-system/calico-node-h8mcp" Feb 13 19:00:04.836516 systemd[1]: Created slice kubepods-besteffort-poda7956ccd_d77e_443e_8487_d6974a1e5d76.slice - libcontainer container kubepods-besteffort-poda7956ccd_d77e_443e_8487_d6974a1e5d76.slice. 
Feb 13 19:00:04.837381 kubelet[2567]: I0213 19:00:04.834986 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3fe415d4-f302-450f-91d3-fcb01c83c343-flexvol-driver-host\") pod \"calico-node-h8mcp\" (UID: \"3fe415d4-f302-450f-91d3-fcb01c83c343\") " pod="calico-system/calico-node-h8mcp" Feb 13 19:00:04.837381 kubelet[2567]: I0213 19:00:04.835008 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tgjd\" (UniqueName: \"kubernetes.io/projected/3fe415d4-f302-450f-91d3-fcb01c83c343-kube-api-access-9tgjd\") pod \"calico-node-h8mcp\" (UID: \"3fe415d4-f302-450f-91d3-fcb01c83c343\") " pod="calico-system/calico-node-h8mcp" Feb 13 19:00:04.837381 kubelet[2567]: I0213 19:00:04.835024 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a7956ccd-d77e-443e-8487-d6974a1e5d76-kube-proxy\") pod \"kube-proxy-cgd9g\" (UID: \"a7956ccd-d77e-443e-8487-d6974a1e5d76\") " pod="kube-system/kube-proxy-cgd9g" Feb 13 19:00:04.837381 kubelet[2567]: I0213 19:00:04.835038 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3fe415d4-f302-450f-91d3-fcb01c83c343-policysync\") pod \"calico-node-h8mcp\" (UID: \"3fe415d4-f302-450f-91d3-fcb01c83c343\") " pod="calico-system/calico-node-h8mcp" Feb 13 19:00:04.837381 kubelet[2567]: I0213 19:00:04.835056 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3fe415d4-f302-450f-91d3-fcb01c83c343-var-lib-calico\") pod \"calico-node-h8mcp\" (UID: \"3fe415d4-f302-450f-91d3-fcb01c83c343\") " pod="calico-system/calico-node-h8mcp" Feb 13 19:00:04.837512 kubelet[2567]: I0213 19:00:04.835074 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3fe415d4-f302-450f-91d3-fcb01c83c343-cni-log-dir\") pod \"calico-node-h8mcp\" (UID: \"3fe415d4-f302-450f-91d3-fcb01c83c343\") " pod="calico-system/calico-node-h8mcp" Feb 13 19:00:04.939859 kubelet[2567]: E0213 19:00:04.939831 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.939859 kubelet[2567]: W0213 19:00:04.939893 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.940189 kubelet[2567]: E0213 19:00:04.939981 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:00:04.941419 kubelet[2567]: E0213 19:00:04.940689 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.941419 kubelet[2567]: W0213 19:00:04.940704 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.941419 kubelet[2567]: E0213 19:00:04.941364 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.941935 kubelet[2567]: E0213 19:00:04.941830 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.941935 kubelet[2567]: W0213 19:00:04.941844 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.942205 kubelet[2567]: E0213 19:00:04.942102 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.942205 kubelet[2567]: W0213 19:00:04.942114 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.942412 kubelet[2567]: E0213 19:00:04.942330 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.942412 kubelet[2567]: W0213 19:00:04.942341 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.942796 kubelet[2567]: E0213 19:00:04.942664 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.942796 kubelet[2567]: E0213 19:00:04.942699 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.942796 kubelet[2567]: E0213 19:00:04.942678 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.942796 kubelet[2567]: W0213 19:00:04.942724 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.942796 kubelet[2567]: E0213 19:00:04.942761 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.942796 kubelet[2567]: E0213 19:00:04.942781 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:00:04.943011 kubelet[2567]: E0213 19:00:04.942897 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.943011 kubelet[2567]: W0213 19:00:04.942905 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.943011 kubelet[2567]: E0213 19:00:04.942970 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.943143 kubelet[2567]: E0213 19:00:04.943074 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.943143 kubelet[2567]: W0213 19:00:04.943081 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.943143 kubelet[2567]: E0213 19:00:04.943106 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.944474 kubelet[2567]: E0213 19:00:04.943828 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.944474 kubelet[2567]: W0213 19:00:04.943850 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.944474 kubelet[2567]: E0213 19:00:04.943869 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.944474 kubelet[2567]: E0213 19:00:04.944037 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.944474 kubelet[2567]: W0213 19:00:04.944044 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.944474 kubelet[2567]: E0213 19:00:04.944053 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.944474 kubelet[2567]: E0213 19:00:04.944184 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.944474 kubelet[2567]: W0213 19:00:04.944191 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.944474 kubelet[2567]: E0213 19:00:04.944201 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:00:04.944474 kubelet[2567]: E0213 19:00:04.944339 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.944753 kubelet[2567]: W0213 19:00:04.944346 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.944753 kubelet[2567]: E0213 19:00:04.944353 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.944813 kubelet[2567]: E0213 19:00:04.944789 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.944813 kubelet[2567]: W0213 19:00:04.944807 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.944861 kubelet[2567]: E0213 19:00:04.944820 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.944973 kubelet[2567]: E0213 19:00:04.944952 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.944973 kubelet[2567]: W0213 19:00:04.944967 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.945040 kubelet[2567]: E0213 19:00:04.944976 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.945158 kubelet[2567]: E0213 19:00:04.945132 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.945158 kubelet[2567]: W0213 19:00:04.945145 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.945158 kubelet[2567]: E0213 19:00:04.945153 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.945299 kubelet[2567]: E0213 19:00:04.945280 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.945299 kubelet[2567]: W0213 19:00:04.945293 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.945366 kubelet[2567]: E0213 19:00:04.945301 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:00:04.945429 kubelet[2567]: E0213 19:00:04.945412 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.945429 kubelet[2567]: W0213 19:00:04.945423 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.945429 kubelet[2567]: E0213 19:00:04.945446 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.945731 kubelet[2567]: E0213 19:00:04.945711 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.945731 kubelet[2567]: W0213 19:00:04.945727 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.945813 kubelet[2567]: E0213 19:00:04.945737 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.945893 kubelet[2567]: E0213 19:00:04.945874 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.945893 kubelet[2567]: W0213 19:00:04.945888 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.945952 kubelet[2567]: E0213 19:00:04.945897 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.946503 kubelet[2567]: E0213 19:00:04.946012 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.946503 kubelet[2567]: W0213 19:00:04.946024 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.946503 kubelet[2567]: E0213 19:00:04.946033 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.946503 kubelet[2567]: E0213 19:00:04.946181 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.946503 kubelet[2567]: W0213 19:00:04.946188 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.946503 kubelet[2567]: E0213 19:00:04.946196 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:00:04.956571 kubelet[2567]: E0213 19:00:04.955535 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.956571 kubelet[2567]: W0213 19:00:04.955590 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.956571 kubelet[2567]: E0213 19:00:04.955615 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.956571 kubelet[2567]: E0213 19:00:04.956337 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.956571 kubelet[2567]: W0213 19:00:04.956348 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.956571 kubelet[2567]: E0213 19:00:04.956369 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:04.960179 kubelet[2567]: E0213 19:00:04.960091 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:00:04.960179 kubelet[2567]: W0213 19:00:04.960109 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:00:04.960179 kubelet[2567]: E0213 19:00:04.960136 2567 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:00:05.140536 containerd[1753]: time="2025-02-13T19:00:05.135680085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h8mcp,Uid:3fe415d4-f302-450f-91d3-fcb01c83c343,Namespace:calico-system,Attempt:0,}" Feb 13 19:00:05.141352 containerd[1753]: time="2025-02-13T19:00:05.141171810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cgd9g,Uid:a7956ccd-d77e-443e-8487-d6974a1e5d76,Namespace:kube-system,Attempt:0,}" Feb 13 19:00:05.808679 kubelet[2567]: E0213 19:00:05.808646 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:05.938728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4245814360.mount: Deactivated successfully. 
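The repeated FlexVolume probe errors above come from the kubelet exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument `init` and receiving no output at all (the binary is missing), so the JSON unmarshal of an empty string fails. For illustration only, this is roughly the kind of JSON status reply a FlexVolume driver prints for `init`; treat the exact capability fields as an assumption rather than a quote of the FlexVolume contract.

```python
#!/usr/bin/env python3
"""Sketch of a FlexVolume driver entry point, to show what the probe above
was waiting for. The empty stdout it actually got is why the kubelet logs
"unexpected end of JSON input". Capability fields are an assumption.
"""
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # The kubelet parses stdout as JSON; an empty reply is an error.
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
        return 0
    print(json.dumps({"status": "Not supported", "message": f"operation {op!r}"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```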
Feb 13 19:00:05.954128 containerd[1753]: time="2025-02-13T19:00:05.954081380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:00:05.963831 containerd[1753]: time="2025-02-13T19:00:05.963766468Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:00:05.974948 containerd[1753]: time="2025-02-13T19:00:05.973620757Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:00:05.979471 containerd[1753]: time="2025-02-13T19:00:05.979152882Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:00:05.981364 containerd[1753]: time="2025-02-13T19:00:05.981310164Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:00:05.985216 containerd[1753]: time="2025-02-13T19:00:05.985169168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:00:05.986176 containerd[1753]: time="2025-02-13T19:00:05.985942088Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 850.178123ms" Feb 13 19:00:05.991300 containerd[1753]: time="2025-02-13T19:00:05.991246653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 849.999083ms" Feb 13 19:00:06.809050 kubelet[2567]: E0213 19:00:06.808995 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:06.816472 containerd[1753]: time="2025-02-13T19:00:06.816265514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:00:06.816472 containerd[1753]: time="2025-02-13T19:00:06.816421474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:00:06.817448 containerd[1753]: time="2025-02-13T19:00:06.816926274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:00:06.817759 containerd[1753]: time="2025-02-13T19:00:06.817634755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:00:06.818696 containerd[1753]: time="2025-02-13T19:00:06.818575996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:00:06.818918 containerd[1753]: time="2025-02-13T19:00:06.818722676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:00:06.818918 containerd[1753]: time="2025-02-13T19:00:06.818787916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:00:06.821508 containerd[1753]: time="2025-02-13T19:00:06.819337597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:00:06.861745 kubelet[2567]: E0213 19:00:06.861702 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:07.080143 systemd[1]: run-containerd-runc-k8s.io-bfe7b03f0714e3a8d84d4cfa84ed9c7ad07feaf33433532c987afd6b6efaad35-runc.k66K8E.mount: Deactivated successfully. Feb 13 19:00:07.090672 systemd[1]: Started cri-containerd-bfe7b03f0714e3a8d84d4cfa84ed9c7ad07feaf33433532c987afd6b6efaad35.scope - libcontainer container bfe7b03f0714e3a8d84d4cfa84ed9c7ad07feaf33433532c987afd6b6efaad35. Feb 13 19:00:07.093760 systemd[1]: Started cri-containerd-f912b4e27a87a5a6a3462d83530b1cf0a5128a468b5628e165836902a1a117c3.scope - libcontainer container f912b4e27a87a5a6a3462d83530b1cf0a5128a468b5628e165836902a1a117c3. Feb 13 19:00:07.125036 containerd[1753]: time="2025-02-13T19:00:07.124544591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h8mcp,Uid:3fe415d4-f302-450f-91d3-fcb01c83c343,Namespace:calico-system,Attempt:0,} returns sandbox id \"bfe7b03f0714e3a8d84d4cfa84ed9c7ad07feaf33433532c987afd6b6efaad35\"" Feb 13 19:00:07.128897 containerd[1753]: time="2025-02-13T19:00:07.128408194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cgd9g,Uid:a7956ccd-d77e-443e-8487-d6974a1e5d76,Namespace:kube-system,Attempt:0,} returns sandbox id \"f912b4e27a87a5a6a3462d83530b1cf0a5128a468b5628e165836902a1a117c3\"" Feb 13 19:00:07.129313 containerd[1753]: time="2025-02-13T19:00:07.129166515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:00:07.809501 kubelet[2567]: E0213 19:00:07.809422 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:08.809649 kubelet[2567]: E0213 19:00:08.809605 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:08.863008 kubelet[2567]: E0213 19:00:08.862560 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:09.810527 kubelet[2567]: E0213 19:00:09.810479 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:10.811246 kubelet[2567]: E0213 19:00:10.811196 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 19:00:10.864010 kubelet[2567]: E0213 19:00:10.863748 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:11.811988 kubelet[2567]: E0213 19:00:11.811931 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:12.812213 kubelet[2567]: E0213 19:00:12.812178 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:12.862939 kubelet[2567]: E0213 19:00:12.862554 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:13.812840 kubelet[2567]: E0213 19:00:13.812785 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:14.813581 kubelet[2567]: E0213 19:00:14.813542 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:14.862715 kubelet[2567]: E0213 19:00:14.862303 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:15.761073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2629582197.mount: Deactivated successfully. 
Feb 13 19:00:15.814506 kubelet[2567]: E0213 19:00:15.814444 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:15.905102 containerd[1753]: time="2025-02-13T19:00:15.904348801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:15.908020 containerd[1753]: time="2025-02-13T19:00:15.907957399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Feb 13 19:00:15.911939 containerd[1753]: time="2025-02-13T19:00:15.911874237Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:15.918328 containerd[1753]: time="2025-02-13T19:00:15.918282433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:15.919273 containerd[1753]: time="2025-02-13T19:00:15.918806993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 8.789592918s" Feb 13 19:00:15.919273 containerd[1753]: time="2025-02-13T19:00:15.918848513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 19:00:15.920590 containerd[1753]: time="2025-02-13T19:00:15.920274752Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:00:15.922069 containerd[1753]: time="2025-02-13T19:00:15.921881591Z" level=info msg="CreateContainer within sandbox \"bfe7b03f0714e3a8d84d4cfa84ed9c7ad07feaf33433532c987afd6b6efaad35\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:00:15.965290 containerd[1753]: time="2025-02-13T19:00:15.965242406Z" level=info msg="CreateContainer within sandbox \"bfe7b03f0714e3a8d84d4cfa84ed9c7ad07feaf33433532c987afd6b6efaad35\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a769e08366a8d3682075168361364aa739fe5415bb789e423824cb30d942d075\"" Feb 13 19:00:15.966472 containerd[1753]: time="2025-02-13T19:00:15.966351645Z" level=info msg="StartContainer for \"a769e08366a8d3682075168361364aa739fe5415bb789e423824cb30d942d075\"" Feb 13 19:00:15.997657 systemd[1]: Started cri-containerd-a769e08366a8d3682075168361364aa739fe5415bb789e423824cb30d942d075.scope - libcontainer container a769e08366a8d3682075168361364aa739fe5415bb789e423824cb30d942d075. Feb 13 19:00:16.035864 containerd[1753]: time="2025-02-13T19:00:16.034908885Z" level=info msg="StartContainer for \"a769e08366a8d3682075168361364aa739fe5415bb789e423824cb30d942d075\" returns successfully" Feb 13 19:00:16.041339 systemd[1]: cri-containerd-a769e08366a8d3682075168361364aa739fe5415bb789e423824cb30d942d075.scope: Deactivated successfully. 
Feb 13 19:00:16.132454 containerd[1753]: time="2025-02-13T19:00:16.132345113Z" level=info msg="shim disconnected" id=a769e08366a8d3682075168361364aa739fe5415bb789e423824cb30d942d075 namespace=k8s.io Feb 13 19:00:16.132454 containerd[1753]: time="2025-02-13T19:00:16.132429953Z" level=warning msg="cleaning up after shim disconnected" id=a769e08366a8d3682075168361364aa739fe5415bb789e423824cb30d942d075 namespace=k8s.io Feb 13 19:00:16.132454 containerd[1753]: time="2025-02-13T19:00:16.132460473Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:00:16.739777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a769e08366a8d3682075168361364aa739fe5415bb789e423824cb30d942d075-rootfs.mount: Deactivated successfully. Feb 13 19:00:16.815317 kubelet[2567]: E0213 19:00:16.815265 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:16.862487 kubelet[2567]: E0213 19:00:16.862001 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:17.054394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954999908.mount: Deactivated successfully. Feb 13 19:00:17.417593 containerd[1753]: time="2025-02-13T19:00:17.417532995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:17.424032 containerd[1753]: time="2025-02-13T19:00:17.423973436Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363382" Feb 13 19:00:17.428265 containerd[1753]: time="2025-02-13T19:00:17.428206716Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:17.432970 containerd[1753]: time="2025-02-13T19:00:17.432907477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:17.433712 containerd[1753]: time="2025-02-13T19:00:17.433555637Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.513244805s" Feb 13 19:00:17.433712 containerd[1753]: time="2025-02-13T19:00:17.433595757Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 19:00:17.435286 containerd[1753]: time="2025-02-13T19:00:17.435240398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:00:17.436482 containerd[1753]: time="2025-02-13T19:00:17.436414038Z" level=info msg="CreateContainer within sandbox \"f912b4e27a87a5a6a3462d83530b1cf0a5128a468b5628e165836902a1a117c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:00:17.491189 containerd[1753]: time="2025-02-13T19:00:17.491136846Z" level=info 
msg="CreateContainer within sandbox \"f912b4e27a87a5a6a3462d83530b1cf0a5128a468b5628e165836902a1a117c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0baf2aec2f7a219eaac3e6cac310c4da36d448186d1b60e210130b96d8f19700\"" Feb 13 19:00:17.492257 containerd[1753]: time="2025-02-13T19:00:17.492207686Z" level=info msg="StartContainer for \"0baf2aec2f7a219eaac3e6cac310c4da36d448186d1b60e210130b96d8f19700\"" Feb 13 19:00:17.517675 systemd[1]: Started cri-containerd-0baf2aec2f7a219eaac3e6cac310c4da36d448186d1b60e210130b96d8f19700.scope - libcontainer container 0baf2aec2f7a219eaac3e6cac310c4da36d448186d1b60e210130b96d8f19700. Feb 13 19:00:17.549153 containerd[1753]: time="2025-02-13T19:00:17.548955695Z" level=info msg="StartContainer for \"0baf2aec2f7a219eaac3e6cac310c4da36d448186d1b60e210130b96d8f19700\" returns successfully" Feb 13 19:00:17.815912 kubelet[2567]: E0213 19:00:17.815777 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:18.816329 kubelet[2567]: E0213 19:00:18.816291 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:18.863025 kubelet[2567]: E0213 19:00:18.862677 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:19.817329 kubelet[2567]: E0213 19:00:19.817295 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:20.818393 kubelet[2567]: E0213 19:00:20.818354 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:20.862979 kubelet[2567]: E0213 19:00:20.862625 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:21.819525 kubelet[2567]: E0213 19:00:21.819480 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:22.805053 kubelet[2567]: E0213 19:00:22.804952 2567 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:22.820568 kubelet[2567]: E0213 19:00:22.820524 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:22.863066 kubelet[2567]: E0213 19:00:22.862720 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:23.821044 kubelet[2567]: E0213 19:00:23.820987 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:24.822069 kubelet[2567]: E0213 19:00:24.822025 2567 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:24.858775 containerd[1753]: time="2025-02-13T19:00:24.858705844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:24.862068 kubelet[2567]: E0213 19:00:24.862032 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:24.865606 containerd[1753]: time="2025-02-13T19:00:24.865531202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 19:00:24.874727 containerd[1753]: time="2025-02-13T19:00:24.874656159Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:24.879545 containerd[1753]: time="2025-02-13T19:00:24.879461758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:24.880424 containerd[1753]: time="2025-02-13T19:00:24.879910238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 7.44458636s" Feb 13 19:00:24.880424 containerd[1753]: time="2025-02-13T19:00:24.879943918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 19:00:24.882320 containerd[1753]: time="2025-02-13T19:00:24.882146917Z" level=info msg="CreateContainer within sandbox \"bfe7b03f0714e3a8d84d4cfa84ed9c7ad07feaf33433532c987afd6b6efaad35\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:00:24.926662 containerd[1753]: time="2025-02-13T19:00:24.926612384Z" level=info msg="CreateContainer within sandbox \"bfe7b03f0714e3a8d84d4cfa84ed9c7ad07feaf33433532c987afd6b6efaad35\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"458705a54734ab71789952cadbdd2965b6a3c2d7519857ba5f591df2ccc2d2e4\"" Feb 13 19:00:24.927463 containerd[1753]: time="2025-02-13T19:00:24.927127983Z" level=info msg="StartContainer for \"458705a54734ab71789952cadbdd2965b6a3c2d7519857ba5f591df2ccc2d2e4\"" Feb 13 19:00:24.962749 systemd[1]: Started cri-containerd-458705a54734ab71789952cadbdd2965b6a3c2d7519857ba5f591df2ccc2d2e4.scope - libcontainer container 458705a54734ab71789952cadbdd2965b6a3c2d7519857ba5f591df2ccc2d2e4. 
Feb 13 19:00:24.996419 containerd[1753]: time="2025-02-13T19:00:24.996366402Z" level=info msg="StartContainer for \"458705a54734ab71789952cadbdd2965b6a3c2d7519857ba5f591df2ccc2d2e4\" returns successfully" Feb 13 19:00:25.823214 kubelet[2567]: E0213 19:00:25.823152 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:25.930401 kubelet[2567]: I0213 19:00:25.930260 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cgd9g" podStartSLOduration=12.626659798 podStartE2EDuration="22.930241678s" podCreationTimestamp="2025-02-13 19:00:03 +0000 UTC" firstStartedPulling="2025-02-13 19:00:07.131021637 +0000 UTC m=+6.070214012" lastFinishedPulling="2025-02-13 19:00:17.434603517 +0000 UTC m=+16.373795892" observedRunningTime="2025-02-13 19:00:17.908486672 +0000 UTC m=+16.847679047" watchObservedRunningTime="2025-02-13 19:00:25.930241678 +0000 UTC m=+24.869434053" Feb 13 19:00:25.991844 systemd[1]: cri-containerd-458705a54734ab71789952cadbdd2965b6a3c2d7519857ba5f591df2ccc2d2e4.scope: Deactivated successfully. Feb 13 19:00:26.008933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-458705a54734ab71789952cadbdd2965b6a3c2d7519857ba5f591df2ccc2d2e4-rootfs.mount: Deactivated successfully. Feb 13 19:00:26.085072 kubelet[2567]: I0213 19:00:26.084719 2567 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:00:26.823427 kubelet[2567]: E0213 19:00:26.823378 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:26.867475 systemd[1]: Created slice kubepods-besteffort-pod4c5666df_4229_496f_8e68_a4354f6b8968.slice - libcontainer container kubepods-besteffort-pod4c5666df_4229_496f_8e68_a4354f6b8968.slice. 
Feb 13 19:00:26.869857 containerd[1753]: time="2025-02-13T19:00:26.869822432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:0,}" Feb 13 19:00:27.776329 containerd[1753]: time="2025-02-13T19:00:27.776096637Z" level=info msg="shim disconnected" id=458705a54734ab71789952cadbdd2965b6a3c2d7519857ba5f591df2ccc2d2e4 namespace=k8s.io Feb 13 19:00:27.776329 containerd[1753]: time="2025-02-13T19:00:27.776156477Z" level=warning msg="cleaning up after shim disconnected" id=458705a54734ab71789952cadbdd2965b6a3c2d7519857ba5f591df2ccc2d2e4 namespace=k8s.io Feb 13 19:00:27.776329 containerd[1753]: time="2025-02-13T19:00:27.776165357Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:00:27.795208 containerd[1753]: time="2025-02-13T19:00:27.795162071Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:00:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:00:27.824417 kubelet[2567]: E0213 19:00:27.824358 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:27.844099 containerd[1753]: time="2025-02-13T19:00:27.844045496Z" level=error msg="Failed to destroy network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:27.844641 containerd[1753]: time="2025-02-13T19:00:27.844605776Z" level=error msg="encountered an error cleaning up failed sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:27.845688 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640-shm.mount: Deactivated successfully. 
Feb 13 19:00:27.846217 containerd[1753]: time="2025-02-13T19:00:27.846073375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:27.846383 kubelet[2567]: E0213 19:00:27.846348 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:27.846478 kubelet[2567]: E0213 19:00:27.846412 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:27.846478 kubelet[2567]: E0213 19:00:27.846452 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:27.846595 kubelet[2567]: E0213 19:00:27.846504 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:27.918417 containerd[1753]: time="2025-02-13T19:00:27.918172953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:00:27.918793 kubelet[2567]: I0213 19:00:27.918394 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640" Feb 13 19:00:27.919457 containerd[1753]: time="2025-02-13T19:00:27.919236953Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\"" Feb 13 19:00:27.919457 containerd[1753]: time="2025-02-13T19:00:27.919453073Z" level=info msg="Ensure that sandbox 18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640 in task-service has been cleanup successfully" Feb 13 19:00:27.921493 
containerd[1753]: time="2025-02-13T19:00:27.919656393Z" level=info msg="TearDown network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" successfully" Feb 13 19:00:27.921493 containerd[1753]: time="2025-02-13T19:00:27.919701153Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" returns successfully" Feb 13 19:00:27.921033 systemd[1]: run-netns-cni\x2d939e15b3\x2d5fab\x2d8098\x2dfe1d\x2db8250a855c97.mount: Deactivated successfully. Feb 13 19:00:27.922573 containerd[1753]: time="2025-02-13T19:00:27.922346672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:1,}" Feb 13 19:00:28.003940 containerd[1753]: time="2025-02-13T19:00:28.003875607Z" level=error msg="Failed to destroy network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:28.004261 containerd[1753]: time="2025-02-13T19:00:28.004226247Z" level=error msg="encountered an error cleaning up failed sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:28.004318 containerd[1753]: time="2025-02-13T19:00:28.004296847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:28.004595 kubelet[2567]: E0213 19:00:28.004552 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:28.004664 kubelet[2567]: E0213 19:00:28.004619 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:28.004664 kubelet[2567]: E0213 19:00:28.004647 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:28.004733 kubelet[2567]: E0213 19:00:28.004702 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:28.787057 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797-shm.mount: Deactivated successfully. Feb 13 19:00:28.824750 kubelet[2567]: E0213 19:00:28.824676 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:28.921773 kubelet[2567]: I0213 19:00:28.921683 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797" Feb 13 19:00:28.922891 containerd[1753]: time="2025-02-13T19:00:28.922569728Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\"" Feb 13 19:00:28.922891 containerd[1753]: time="2025-02-13T19:00:28.922754728Z" level=info msg="Ensure that sandbox b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797 in task-service has been cleanup successfully" Feb 13 19:00:28.925033 systemd[1]: run-netns-cni\x2d439fc2d9\x2d8542\x2d3d88\x2dfc10\x2db174cb04d2ae.mount: Deactivated successfully. Feb 13 19:00:28.925646 containerd[1753]: time="2025-02-13T19:00:28.925167967Z" level=info msg="TearDown network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" successfully" Feb 13 19:00:28.925646 containerd[1753]: time="2025-02-13T19:00:28.925198687Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" returns successfully" Feb 13 19:00:28.927242 containerd[1753]: time="2025-02-13T19:00:28.926470087Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\"" Feb 13 19:00:28.927242 containerd[1753]: time="2025-02-13T19:00:28.926596327Z" level=info msg="TearDown network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" successfully" Feb 13 19:00:28.927242 containerd[1753]: time="2025-02-13T19:00:28.926606607Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" returns successfully" Feb 13 19:00:28.928337 containerd[1753]: time="2025-02-13T19:00:28.927846166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:2,}" Feb 13 19:00:28.983187 systemd[1]: Created slice kubepods-besteffort-pod1691e8e1_fddf_4f28_8c3f_71dc519ee6e4.slice - libcontainer container kubepods-besteffort-pod1691e8e1_fddf_4f28_8c3f_71dc519ee6e4.slice. 
Feb 13 19:00:28.996849 kubelet[2567]: I0213 19:00:28.996805 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztddk\" (UniqueName: \"kubernetes.io/projected/1691e8e1-fddf-4f28-8c3f-71dc519ee6e4-kube-api-access-ztddk\") pod \"nginx-deployment-7fcdb87857-bdh6p\" (UID: \"1691e8e1-fddf-4f28-8c3f-71dc519ee6e4\") " pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:29.024650 containerd[1753]: time="2025-02-13T19:00:29.024590097Z" level=error msg="Failed to destroy network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:29.026081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8-shm.mount: Deactivated successfully. Feb 13 19:00:29.026905 containerd[1753]: time="2025-02-13T19:00:29.026619256Z" level=error msg="encountered an error cleaning up failed sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:29.026905 containerd[1753]: time="2025-02-13T19:00:29.026695936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:29.027103 kubelet[2567]: E0213 19:00:29.026974 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:29.027103 kubelet[2567]: E0213 19:00:29.027035 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:29.027103 kubelet[2567]: E0213 19:00:29.027054 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:29.027368 kubelet[2567]: E0213 19:00:29.027099 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:29.288826 containerd[1753]: time="2025-02-13T19:00:29.288771776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:0,}" Feb 13 19:00:29.378704 containerd[1753]: time="2025-02-13T19:00:29.378490109Z" level=error msg="Failed to destroy network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:29.379242 containerd[1753]: time="2025-02-13T19:00:29.379036389Z" level=error msg="encountered an error cleaning up failed sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:29.379242 containerd[1753]: time="2025-02-13T19:00:29.379125789Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:29.379454 kubelet[2567]: E0213 19:00:29.379400 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:29.379512 kubelet[2567]: E0213 19:00:29.379477 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:29.379512 kubelet[2567]: E0213 19:00:29.379501 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:29.379577 kubelet[2567]: E0213 19:00:29.379545 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-bdh6p" podUID="1691e8e1-fddf-4f28-8c3f-71dc519ee6e4" Feb 13 19:00:29.825144 kubelet[2567]: E0213 19:00:29.825091 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:29.925353 kubelet[2567]: I0213 19:00:29.925314 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8" Feb 13 19:00:29.926490 containerd[1753]: time="2025-02-13T19:00:29.926380983Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\"" Feb 13 19:00:29.928911 containerd[1753]: time="2025-02-13T19:00:29.926754502Z" level=info msg="Ensure that sandbox 3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8 in task-service has been cleanup successfully" Feb 13 19:00:29.929148 containerd[1753]: time="2025-02-13T19:00:29.929116742Z" level=info msg="TearDown network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" successfully" Feb 13 19:00:29.929363 containerd[1753]: time="2025-02-13T19:00:29.929246622Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" returns successfully" Feb 13 19:00:29.929731 systemd[1]: run-netns-cni\x2d14fd6f07\x2dae73\x2db6b4\x2d4c9b\x2de4afdd0dc9f1.mount: Deactivated successfully. 
Feb 13 19:00:29.930119 kubelet[2567]: I0213 19:00:29.930028 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680" Feb 13 19:00:29.931526 containerd[1753]: time="2025-02-13T19:00:29.931490981Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\"" Feb 13 19:00:29.932146 containerd[1753]: time="2025-02-13T19:00:29.931763341Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\"" Feb 13 19:00:29.932146 containerd[1753]: time="2025-02-13T19:00:29.931917781Z" level=info msg="TearDown network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" successfully" Feb 13 19:00:29.932146 containerd[1753]: time="2025-02-13T19:00:29.931964781Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" returns successfully" Feb 13 19:00:29.932146 containerd[1753]: time="2025-02-13T19:00:29.931972141Z" level=info msg="Ensure that sandbox 4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680 in task-service has been cleanup successfully" Feb 13 19:00:29.934398 containerd[1753]: time="2025-02-13T19:00:29.932646701Z" level=info msg="TearDown network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" successfully" Feb 13 19:00:29.934398 containerd[1753]: time="2025-02-13T19:00:29.932669341Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" returns successfully" Feb 13 19:00:29.934398 containerd[1753]: time="2025-02-13T19:00:29.933473500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:1,}" Feb 13 19:00:29.934398 containerd[1753]: time="2025-02-13T19:00:29.934177540Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\"" Feb 13 19:00:29.934398 containerd[1753]: time="2025-02-13T19:00:29.934309900Z" level=info msg="TearDown network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" successfully" Feb 13 19:00:29.934398 containerd[1753]: time="2025-02-13T19:00:29.934320380Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" returns successfully" Feb 13 19:00:29.936273 systemd[1]: run-netns-cni\x2dbd0e4c1c\x2db32a\x2daf53\x2da809\x2db1778b9b3493.mount: Deactivated successfully. 
Feb 13 19:00:29.938118 containerd[1753]: time="2025-02-13T19:00:29.938080259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:3,}" Feb 13 19:00:30.078841 containerd[1753]: time="2025-02-13T19:00:30.078351416Z" level=error msg="Failed to destroy network for sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:30.079625 containerd[1753]: time="2025-02-13T19:00:30.079334056Z" level=error msg="encountered an error cleaning up failed sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:30.079803 containerd[1753]: time="2025-02-13T19:00:30.079522536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:30.080129 kubelet[2567]: E0213 19:00:30.080028 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:30.080129 kubelet[2567]: E0213 19:00:30.080090 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:30.080129 kubelet[2567]: E0213 19:00:30.080109 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:30.080361 kubelet[2567]: E0213 19:00:30.080152 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-bdh6p" podUID="1691e8e1-fddf-4f28-8c3f-71dc519ee6e4" Feb 13 19:00:30.080824 containerd[1753]: time="2025-02-13T19:00:30.080696016Z" level=error msg="Failed to destroy network for sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:30.081239 containerd[1753]: time="2025-02-13T19:00:30.081175015Z" level=error msg="encountered an error cleaning up failed sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:30.081389 containerd[1753]: time="2025-02-13T19:00:30.081323735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:30.081728 kubelet[2567]: E0213 19:00:30.081681 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:30.081801 kubelet[2567]: E0213 19:00:30.081743 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:30.081801 kubelet[2567]: E0213 19:00:30.081762 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:30.081855 kubelet[2567]: E0213 19:00:30.081803 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:30.788751 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a-shm.mount: Deactivated successfully. Feb 13 19:00:30.826220 kubelet[2567]: E0213 19:00:30.826168 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:30.933942 kubelet[2567]: I0213 19:00:30.933891 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3" Feb 13 19:00:30.934764 containerd[1753]: time="2025-02-13T19:00:30.934714916Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\"" Feb 13 19:00:30.935078 containerd[1753]: time="2025-02-13T19:00:30.935022916Z" level=info msg="Ensure that sandbox b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3 in task-service has been cleanup successfully" Feb 13 19:00:30.937609 containerd[1753]: time="2025-02-13T19:00:30.935692035Z" level=info msg="TearDown network for sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" successfully" Feb 13 19:00:30.937609 containerd[1753]: time="2025-02-13T19:00:30.935720595Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" returns successfully" Feb 13 19:00:30.937815 containerd[1753]: time="2025-02-13T19:00:30.937610195Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\"" Feb 13 19:00:30.937815 containerd[1753]: time="2025-02-13T19:00:30.937717995Z" level=info msg="TearDown network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" successfully" Feb 13 19:00:30.937815 containerd[1753]: time="2025-02-13T19:00:30.937729115Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" returns successfully" Feb 13 19:00:30.938802 containerd[1753]: time="2025-02-13T19:00:30.938426115Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\"" Feb 13 19:00:30.938802 containerd[1753]: time="2025-02-13T19:00:30.938538075Z" level=info msg="TearDown network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" successfully" Feb 13 19:00:30.938802 containerd[1753]: time="2025-02-13T19:00:30.938548155Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" returns successfully" Feb 13 19:00:30.938675 systemd[1]: run-netns-cni\x2df1d31c47\x2d1905\x2d5a04\x2d5c3e\x2d0882c0dd4c53.mount: Deactivated successfully. 
Feb 13 19:00:30.939171 containerd[1753]: time="2025-02-13T19:00:30.939009874Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\"" Feb 13 19:00:30.939171 containerd[1753]: time="2025-02-13T19:00:30.939091554Z" level=info msg="TearDown network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" successfully" Feb 13 19:00:30.939224 containerd[1753]: time="2025-02-13T19:00:30.939101234Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" returns successfully" Feb 13 19:00:30.939883 kubelet[2567]: I0213 19:00:30.939547 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a" Feb 13 19:00:30.940750 containerd[1753]: time="2025-02-13T19:00:30.940252914Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\"" Feb 13 19:00:30.940750 containerd[1753]: time="2025-02-13T19:00:30.940373994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:4,}" Feb 13 19:00:30.940750 containerd[1753]: time="2025-02-13T19:00:30.940566354Z" level=info msg="Ensure that sandbox 9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a in task-service has been cleanup successfully" Feb 13 19:00:30.940885 containerd[1753]: time="2025-02-13T19:00:30.940784074Z" level=info msg="TearDown network for sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" successfully" Feb 13 19:00:30.940885 containerd[1753]: time="2025-02-13T19:00:30.940801914Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" returns successfully" Feb 13 19:00:30.941861 containerd[1753]: time="2025-02-13T19:00:30.941810034Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\"" Feb 13 19:00:30.941979 containerd[1753]: time="2025-02-13T19:00:30.941913874Z" level=info msg="TearDown network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" successfully" Feb 13 19:00:30.941979 containerd[1753]: time="2025-02-13T19:00:30.941931474Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" returns successfully" Feb 13 19:00:30.943591 systemd[1]: run-netns-cni\x2d9d3b1d24\x2d5ef6\x2d531f\x2db17a\x2da859a396140a.mount: Deactivated successfully. 
Feb 13 19:00:30.944462 containerd[1753]: time="2025-02-13T19:00:30.944227153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:2,}" Feb 13 19:00:31.059110 containerd[1753]: time="2025-02-13T19:00:31.058952798Z" level=error msg="Failed to destroy network for sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:31.060121 containerd[1753]: time="2025-02-13T19:00:31.059966398Z" level=error msg="encountered an error cleaning up failed sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:31.060218 containerd[1753]: time="2025-02-13T19:00:31.060182838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:31.060584 kubelet[2567]: E0213 19:00:31.060548 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:31.061086 kubelet[2567]: E0213 19:00:31.060736 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:31.061086 kubelet[2567]: E0213 19:00:31.060787 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:31.061086 kubelet[2567]: E0213 19:00:31.060832 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:31.068654 containerd[1753]: time="2025-02-13T19:00:31.068586675Z" level=error msg="Failed to destroy network for sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:31.069014 containerd[1753]: time="2025-02-13T19:00:31.068968115Z" level=error msg="encountered an error cleaning up failed sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:31.069092 containerd[1753]: time="2025-02-13T19:00:31.069053275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:31.069368 kubelet[2567]: E0213 19:00:31.069328 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:31.069826 kubelet[2567]: E0213 19:00:31.069509 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:31.069826 kubelet[2567]: E0213 19:00:31.069535 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:31.069826 kubelet[2567]: E0213 19:00:31.069580 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-bdh6p" podUID="1691e8e1-fddf-4f28-8c3f-71dc519ee6e4" Feb 13 19:00:31.788120 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd-shm.mount: Deactivated successfully. Feb 13 19:00:31.827059 kubelet[2567]: E0213 19:00:31.827010 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:31.944631 kubelet[2567]: I0213 19:00:31.944329 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd" Feb 13 19:00:31.945922 containerd[1753]: time="2025-02-13T19:00:31.945868128Z" level=info msg="StopPodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\"" Feb 13 19:00:31.946765 containerd[1753]: time="2025-02-13T19:00:31.946057848Z" level=info msg="Ensure that sandbox 282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd in task-service has been cleanup successfully" Feb 13 19:00:31.947532 systemd[1]: run-netns-cni\x2d401a1fa3\x2d2166\x2dc3aa\x2dbe62\x2d9e02fd76a353.mount: Deactivated successfully. Feb 13 19:00:31.949192 containerd[1753]: time="2025-02-13T19:00:31.948659807Z" level=info msg="TearDown network for sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" successfully" Feb 13 19:00:31.949192 containerd[1753]: time="2025-02-13T19:00:31.948694927Z" level=info msg="StopPodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" returns successfully" Feb 13 19:00:31.950504 containerd[1753]: time="2025-02-13T19:00:31.949948007Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\"" Feb 13 19:00:31.950504 containerd[1753]: time="2025-02-13T19:00:31.950146607Z" level=info msg="TearDown network for sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" successfully" Feb 13 19:00:31.950504 containerd[1753]: time="2025-02-13T19:00:31.950159887Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" returns successfully" Feb 13 19:00:31.950766 kubelet[2567]: I0213 19:00:31.950134 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e" Feb 13 19:00:31.951234 containerd[1753]: time="2025-02-13T19:00:31.951010607Z" level=info msg="StopPodSandbox for \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\"" Feb 13 19:00:31.951312 containerd[1753]: time="2025-02-13T19:00:31.951242047Z" level=info msg="Ensure that sandbox e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e in task-service has been cleanup successfully" Feb 13 19:00:31.952412 containerd[1753]: time="2025-02-13T19:00:31.951823006Z" level=info msg="TearDown network for sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" successfully" Feb 13 19:00:31.952412 containerd[1753]: time="2025-02-13T19:00:31.951853166Z" level=info msg="StopPodSandbox for 
\"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" returns successfully" Feb 13 19:00:31.953120 systemd[1]: run-netns-cni\x2d583b0527\x2d6701\x2d73db\x2d3da5\x2d77cfcf25ca24.mount: Deactivated successfully. Feb 13 19:00:31.954136 containerd[1753]: time="2025-02-13T19:00:31.954089566Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\"" Feb 13 19:00:31.954228 containerd[1753]: time="2025-02-13T19:00:31.954215406Z" level=info msg="TearDown network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" successfully" Feb 13 19:00:31.954253 containerd[1753]: time="2025-02-13T19:00:31.954227406Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" returns successfully" Feb 13 19:00:31.956400 containerd[1753]: time="2025-02-13T19:00:31.955938565Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\"" Feb 13 19:00:31.956400 containerd[1753]: time="2025-02-13T19:00:31.956020205Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\"" Feb 13 19:00:31.956400 containerd[1753]: time="2025-02-13T19:00:31.956076005Z" level=info msg="TearDown network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" successfully" Feb 13 19:00:31.956400 containerd[1753]: time="2025-02-13T19:00:31.956099805Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" returns successfully" Feb 13 19:00:31.956400 containerd[1753]: time="2025-02-13T19:00:31.956109565Z" level=info msg="TearDown network for sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" successfully" Feb 13 19:00:31.956400 containerd[1753]: time="2025-02-13T19:00:31.956121845Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" returns successfully" Feb 13 19:00:31.957678 containerd[1753]: time="2025-02-13T19:00:31.956982885Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\"" Feb 13 19:00:31.957678 containerd[1753]: time="2025-02-13T19:00:31.957101965Z" level=info msg="TearDown network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" successfully" Feb 13 19:00:31.957678 containerd[1753]: time="2025-02-13T19:00:31.957112045Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" returns successfully" Feb 13 19:00:31.957678 containerd[1753]: time="2025-02-13T19:00:31.957186605Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\"" Feb 13 19:00:31.957678 containerd[1753]: time="2025-02-13T19:00:31.957239325Z" level=info msg="TearDown network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" successfully" Feb 13 19:00:31.957678 containerd[1753]: time="2025-02-13T19:00:31.957247965Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" returns successfully" Feb 13 19:00:31.958650 containerd[1753]: time="2025-02-13T19:00:31.958609524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:3,}" Feb 13 19:00:31.959815 containerd[1753]: time="2025-02-13T19:00:31.958743764Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:5,}" Feb 13 19:00:32.113079 containerd[1753]: time="2025-02-13T19:00:32.112794565Z" level=error msg="Failed to destroy network for sandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:32.115181 containerd[1753]: time="2025-02-13T19:00:32.113507005Z" level=error msg="encountered an error cleaning up failed sandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:32.115181 containerd[1753]: time="2025-02-13T19:00:32.115017446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:32.115569 kubelet[2567]: E0213 19:00:32.115521 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:32.116910 kubelet[2567]: E0213 19:00:32.116564 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:32.116910 kubelet[2567]: E0213 19:00:32.116606 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:32.116910 kubelet[2567]: E0213 19:00:32.116664 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-bdh6p" podUID="1691e8e1-fddf-4f28-8c3f-71dc519ee6e4" Feb 13 19:00:32.134607 containerd[1753]: time="2025-02-13T19:00:32.134556289Z" level=error msg="Failed to destroy network for sandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:32.135788 containerd[1753]: time="2025-02-13T19:00:32.135705929Z" level=error msg="encountered an error cleaning up failed sandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:32.135946 containerd[1753]: time="2025-02-13T19:00:32.135872169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:32.136518 kubelet[2567]: E0213 19:00:32.136350 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:32.136613 kubelet[2567]: E0213 19:00:32.136539 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:32.136741 kubelet[2567]: E0213 19:00:32.136706 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:32.137068 kubelet[2567]: E0213 19:00:32.136875 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:32.788232 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786-shm.mount: Deactivated successfully. Feb 13 19:00:32.827384 kubelet[2567]: E0213 19:00:32.827343 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:32.954576 kubelet[2567]: I0213 19:00:32.953799 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786" Feb 13 19:00:32.954760 containerd[1753]: time="2025-02-13T19:00:32.954705176Z" level=info msg="StopPodSandbox for \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\"" Feb 13 19:00:32.955105 containerd[1753]: time="2025-02-13T19:00:32.954894176Z" level=info msg="Ensure that sandbox 5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786 in task-service has been cleanup successfully" Feb 13 19:00:32.956478 systemd[1]: run-netns-cni\x2d0e1d8cc3\x2d6b24\x2da75a\x2d9489\x2d01b9414f44f1.mount: Deactivated successfully. Feb 13 19:00:32.956909 containerd[1753]: time="2025-02-13T19:00:32.956820216Z" level=info msg="TearDown network for sandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\" successfully" Feb 13 19:00:32.956909 containerd[1753]: time="2025-02-13T19:00:32.956896776Z" level=info msg="StopPodSandbox for \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\" returns successfully" Feb 13 19:00:32.958302 containerd[1753]: time="2025-02-13T19:00:32.958263737Z" level=info msg="StopPodSandbox for \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\"" Feb 13 19:00:32.958383 containerd[1753]: time="2025-02-13T19:00:32.958364217Z" level=info msg="TearDown network for sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" successfully" Feb 13 19:00:32.958383 containerd[1753]: time="2025-02-13T19:00:32.958374497Z" level=info msg="StopPodSandbox for \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" returns successfully" Feb 13 19:00:32.958951 containerd[1753]: time="2025-02-13T19:00:32.958795177Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\"" Feb 13 19:00:32.958951 containerd[1753]: time="2025-02-13T19:00:32.958887657Z" level=info msg="TearDown network for sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" successfully" Feb 13 19:00:32.958951 containerd[1753]: time="2025-02-13T19:00:32.958897777Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" returns successfully" Feb 13 19:00:32.959516 containerd[1753]: time="2025-02-13T19:00:32.959492337Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\"" Feb 13 19:00:32.959745 containerd[1753]: time="2025-02-13T19:00:32.959660257Z" level=info msg="TearDown network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" successfully" Feb 13 19:00:32.959745 containerd[1753]: time="2025-02-13T19:00:32.959678537Z" level=info msg="StopPodSandbox for 
\"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" returns successfully" Feb 13 19:00:32.960173 containerd[1753]: time="2025-02-13T19:00:32.960138377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:4,}" Feb 13 19:00:32.962033 kubelet[2567]: I0213 19:00:32.961999 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e" Feb 13 19:00:32.962910 containerd[1753]: time="2025-02-13T19:00:32.962840897Z" level=info msg="StopPodSandbox for \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\"" Feb 13 19:00:32.963229 containerd[1753]: time="2025-02-13T19:00:32.963131817Z" level=info msg="Ensure that sandbox b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e in task-service has been cleanup successfully" Feb 13 19:00:32.963553 containerd[1753]: time="2025-02-13T19:00:32.963472297Z" level=info msg="TearDown network for sandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\" successfully" Feb 13 19:00:32.963553 containerd[1753]: time="2025-02-13T19:00:32.963493937Z" level=info msg="StopPodSandbox for \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\" returns successfully" Feb 13 19:00:32.964934 systemd[1]: run-netns-cni\x2d2f0f9fbb\x2dff1b\x2d3847\x2d97c4\x2d99a1ee56ae95.mount: Deactivated successfully. Feb 13 19:00:32.966030 containerd[1753]: time="2025-02-13T19:00:32.965990978Z" level=info msg="StopPodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\"" Feb 13 19:00:32.966129 containerd[1753]: time="2025-02-13T19:00:32.966107658Z" level=info msg="TearDown network for sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" successfully" Feb 13 19:00:32.966129 containerd[1753]: time="2025-02-13T19:00:32.966118338Z" level=info msg="StopPodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" returns successfully" Feb 13 19:00:32.967344 containerd[1753]: time="2025-02-13T19:00:32.967301658Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\"" Feb 13 19:00:32.967684 containerd[1753]: time="2025-02-13T19:00:32.967508658Z" level=info msg="TearDown network for sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" successfully" Feb 13 19:00:32.967684 containerd[1753]: time="2025-02-13T19:00:32.967523298Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" returns successfully" Feb 13 19:00:32.967882 containerd[1753]: time="2025-02-13T19:00:32.967859698Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\"" Feb 13 19:00:32.968101 containerd[1753]: time="2025-02-13T19:00:32.967992458Z" level=info msg="TearDown network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" successfully" Feb 13 19:00:32.968101 containerd[1753]: time="2025-02-13T19:00:32.968027498Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" returns successfully" Feb 13 19:00:32.968340 containerd[1753]: time="2025-02-13T19:00:32.968310058Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\"" Feb 13 19:00:32.968504 containerd[1753]: 
time="2025-02-13T19:00:32.968398778Z" level=info msg="TearDown network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" successfully" Feb 13 19:00:32.968504 containerd[1753]: time="2025-02-13T19:00:32.968409418Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" returns successfully" Feb 13 19:00:32.969314 containerd[1753]: time="2025-02-13T19:00:32.969158418Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\"" Feb 13 19:00:32.969314 containerd[1753]: time="2025-02-13T19:00:32.969245778Z" level=info msg="TearDown network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" successfully" Feb 13 19:00:32.969314 containerd[1753]: time="2025-02-13T19:00:32.969255658Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" returns successfully" Feb 13 19:00:32.969761 containerd[1753]: time="2025-02-13T19:00:32.969728298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:6,}" Feb 13 19:00:33.244323 containerd[1753]: time="2025-02-13T19:00:33.244262461Z" level=error msg="Failed to destroy network for sandbox \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:33.245992 containerd[1753]: time="2025-02-13T19:00:33.245792741Z" level=error msg="encountered an error cleaning up failed sandbox \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:33.245992 containerd[1753]: time="2025-02-13T19:00:33.245906701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:33.247011 kubelet[2567]: E0213 19:00:33.246391 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:33.247011 kubelet[2567]: E0213 19:00:33.246463 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:33.247011 
kubelet[2567]: E0213 19:00:33.246485 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:33.247177 kubelet[2567]: E0213 19:00:33.246543 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-bdh6p" podUID="1691e8e1-fddf-4f28-8c3f-71dc519ee6e4" Feb 13 19:00:33.259671 containerd[1753]: time="2025-02-13T19:00:33.259602583Z" level=error msg="Failed to destroy network for sandbox \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:33.260018 containerd[1753]: time="2025-02-13T19:00:33.259953784Z" level=error msg="encountered an error cleaning up failed sandbox \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:33.260180 containerd[1753]: time="2025-02-13T19:00:33.260033704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:33.260339 kubelet[2567]: E0213 19:00:33.260265 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:33.260339 kubelet[2567]: E0213 19:00:33.260329 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:33.260559 kubelet[2567]: E0213 19:00:33.260351 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:33.260559 kubelet[2567]: E0213 19:00:33.260390 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:33.790177 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395-shm.mount: Deactivated successfully. Feb 13 19:00:33.827928 kubelet[2567]: E0213 19:00:33.827756 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:33.968530 kubelet[2567]: I0213 19:00:33.968489 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531" Feb 13 19:00:33.969583 containerd[1753]: time="2025-02-13T19:00:33.969545334Z" level=info msg="StopPodSandbox for \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\"" Feb 13 19:00:33.970239 containerd[1753]: time="2025-02-13T19:00:33.969849694Z" level=info msg="Ensure that sandbox 37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531 in task-service has been cleanup successfully" Feb 13 19:00:33.971851 containerd[1753]: time="2025-02-13T19:00:33.970358854Z" level=info msg="TearDown network for sandbox \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\" successfully" Feb 13 19:00:33.971851 containerd[1753]: time="2025-02-13T19:00:33.970381494Z" level=info msg="StopPodSandbox for \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\" returns successfully" Feb 13 19:00:33.973465 containerd[1753]: time="2025-02-13T19:00:33.972508334Z" level=info msg="StopPodSandbox for \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\"" Feb 13 19:00:33.973465 containerd[1753]: time="2025-02-13T19:00:33.972784054Z" level=info msg="TearDown network for sandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\" successfully" Feb 13 19:00:33.973465 containerd[1753]: time="2025-02-13T19:00:33.972799254Z" level=info msg="StopPodSandbox for \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\" returns successfully" Feb 13 19:00:33.973465 containerd[1753]: time="2025-02-13T19:00:33.973222454Z" level=info msg="StopPodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\"" Feb 13 19:00:33.973465 containerd[1753]: 
time="2025-02-13T19:00:33.973317094Z" level=info msg="TearDown network for sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" successfully" Feb 13 19:00:33.973465 containerd[1753]: time="2025-02-13T19:00:33.973326574Z" level=info msg="StopPodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" returns successfully" Feb 13 19:00:33.972939 systemd[1]: run-netns-cni\x2d799aa266\x2d7b2f\x2dc8a2\x2dccd1\x2d0ed7f2558c6d.mount: Deactivated successfully. Feb 13 19:00:33.975017 containerd[1753]: time="2025-02-13T19:00:33.974676855Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\"" Feb 13 19:00:33.975017 containerd[1753]: time="2025-02-13T19:00:33.974794255Z" level=info msg="TearDown network for sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" successfully" Feb 13 19:00:33.975017 containerd[1753]: time="2025-02-13T19:00:33.974805655Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" returns successfully" Feb 13 19:00:33.975706 containerd[1753]: time="2025-02-13T19:00:33.975608735Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\"" Feb 13 19:00:33.975880 containerd[1753]: time="2025-02-13T19:00:33.975846255Z" level=info msg="TearDown network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" successfully" Feb 13 19:00:33.975880 containerd[1753]: time="2025-02-13T19:00:33.975870375Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" returns successfully" Feb 13 19:00:33.976476 containerd[1753]: time="2025-02-13T19:00:33.976121735Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\"" Feb 13 19:00:33.976476 containerd[1753]: time="2025-02-13T19:00:33.976192255Z" level=info msg="TearDown network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" successfully" Feb 13 19:00:33.976476 containerd[1753]: time="2025-02-13T19:00:33.976201455Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" returns successfully" Feb 13 19:00:33.976577 kubelet[2567]: I0213 19:00:33.976225 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395" Feb 13 19:00:33.977353 containerd[1753]: time="2025-02-13T19:00:33.976894855Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\"" Feb 13 19:00:33.977353 containerd[1753]: time="2025-02-13T19:00:33.977106335Z" level=info msg="TearDown network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" successfully" Feb 13 19:00:33.977353 containerd[1753]: time="2025-02-13T19:00:33.977121615Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" returns successfully" Feb 13 19:00:33.978145 containerd[1753]: time="2025-02-13T19:00:33.977723815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:7,}" Feb 13 19:00:33.978145 containerd[1753]: time="2025-02-13T19:00:33.977963575Z" level=info msg="StopPodSandbox for \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\"" Feb 13 19:00:33.978347 
containerd[1753]: time="2025-02-13T19:00:33.978113775Z" level=info msg="Ensure that sandbox 4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395 in task-service has been cleanup successfully" Feb 13 19:00:33.980192 systemd[1]: run-netns-cni\x2d11680174\x2db5cc\x2da2a6\x2d0a03\x2da4074d471206.mount: Deactivated successfully. Feb 13 19:00:33.980489 containerd[1753]: time="2025-02-13T19:00:33.980427255Z" level=info msg="TearDown network for sandbox \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\" successfully" Feb 13 19:00:33.980573 containerd[1753]: time="2025-02-13T19:00:33.980559735Z" level=info msg="StopPodSandbox for \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\" returns successfully" Feb 13 19:00:33.982421 containerd[1753]: time="2025-02-13T19:00:33.982377056Z" level=info msg="StopPodSandbox for \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\"" Feb 13 19:00:33.982911 containerd[1753]: time="2025-02-13T19:00:33.982719696Z" level=info msg="TearDown network for sandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\" successfully" Feb 13 19:00:33.982911 containerd[1753]: time="2025-02-13T19:00:33.982740896Z" level=info msg="StopPodSandbox for \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\" returns successfully" Feb 13 19:00:33.983794 containerd[1753]: time="2025-02-13T19:00:33.983767496Z" level=info msg="StopPodSandbox for \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\"" Feb 13 19:00:33.984021 containerd[1753]: time="2025-02-13T19:00:33.984005056Z" level=info msg="TearDown network for sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" successfully" Feb 13 19:00:33.984221 containerd[1753]: time="2025-02-13T19:00:33.984203056Z" level=info msg="StopPodSandbox for \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" returns successfully" Feb 13 19:00:33.984845 containerd[1753]: time="2025-02-13T19:00:33.984809936Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\"" Feb 13 19:00:33.985235 containerd[1753]: time="2025-02-13T19:00:33.985024936Z" level=info msg="TearDown network for sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" successfully" Feb 13 19:00:33.985235 containerd[1753]: time="2025-02-13T19:00:33.985044456Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" returns successfully" Feb 13 19:00:33.985891 containerd[1753]: time="2025-02-13T19:00:33.985867016Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\"" Feb 13 19:00:33.986367 containerd[1753]: time="2025-02-13T19:00:33.986111216Z" level=info msg="TearDown network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" successfully" Feb 13 19:00:33.986367 containerd[1753]: time="2025-02-13T19:00:33.986127616Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" returns successfully" Feb 13 19:00:33.987093 containerd[1753]: time="2025-02-13T19:00:33.987065856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:5,}" Feb 13 19:00:34.149374 containerd[1753]: time="2025-02-13T19:00:34.149242362Z" level=error msg="Failed to destroy network for sandbox 
\"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:34.151414 containerd[1753]: time="2025-02-13T19:00:34.150751042Z" level=error msg="encountered an error cleaning up failed sandbox \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:34.151414 containerd[1753]: time="2025-02-13T19:00:34.150827682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:34.151595 kubelet[2567]: E0213 19:00:34.151027 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:34.151595 kubelet[2567]: E0213 19:00:34.151080 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:34.151595 kubelet[2567]: E0213 19:00:34.151109 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:34.151673 kubelet[2567]: E0213 19:00:34.151157 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-bdh6p" podUID="1691e8e1-fddf-4f28-8c3f-71dc519ee6e4" Feb 13 19:00:34.159085 containerd[1753]: 
time="2025-02-13T19:00:34.158857523Z" level=error msg="Failed to destroy network for sandbox \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:34.160394 containerd[1753]: time="2025-02-13T19:00:34.160068603Z" level=error msg="encountered an error cleaning up failed sandbox \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:34.160783 containerd[1753]: time="2025-02-13T19:00:34.160578443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:34.161651 kubelet[2567]: E0213 19:00:34.161335 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:34.161651 kubelet[2567]: E0213 19:00:34.161395 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:34.161651 kubelet[2567]: E0213 19:00:34.161414 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:34.161928 kubelet[2567]: E0213 19:00:34.161482 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" 
Feb 13 19:00:34.790095 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f-shm.mount: Deactivated successfully. Feb 13 19:00:34.790415 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894-shm.mount: Deactivated successfully. Feb 13 19:00:34.828820 kubelet[2567]: E0213 19:00:34.828762 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:34.983972 kubelet[2567]: I0213 19:00:34.983913 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f" Feb 13 19:00:34.984937 containerd[1753]: time="2025-02-13T19:00:34.984878491Z" level=info msg="StopPodSandbox for \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\"" Feb 13 19:00:34.986518 containerd[1753]: time="2025-02-13T19:00:34.985083011Z" level=info msg="Ensure that sandbox efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f in task-service has been cleanup successfully" Feb 13 19:00:34.987255 containerd[1753]: time="2025-02-13T19:00:34.986687812Z" level=info msg="TearDown network for sandbox \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\" successfully" Feb 13 19:00:34.987255 containerd[1753]: time="2025-02-13T19:00:34.986729172Z" level=info msg="StopPodSandbox for \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\" returns successfully" Feb 13 19:00:34.988579 containerd[1753]: time="2025-02-13T19:00:34.987786332Z" level=info msg="StopPodSandbox for \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\"" Feb 13 19:00:34.988579 containerd[1753]: time="2025-02-13T19:00:34.987907892Z" level=info msg="TearDown network for sandbox \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\" successfully" Feb 13 19:00:34.988579 containerd[1753]: time="2025-02-13T19:00:34.987918812Z" level=info msg="StopPodSandbox for \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\" returns successfully" Feb 13 19:00:34.988180 systemd[1]: run-netns-cni\x2d337b2b93\x2da7de\x2d4659\x2dac5a\x2de50c03fb0e27.mount: Deactivated successfully. 
Feb 13 19:00:34.990330 containerd[1753]: time="2025-02-13T19:00:34.989626012Z" level=info msg="StopPodSandbox for \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\"" Feb 13 19:00:34.990330 containerd[1753]: time="2025-02-13T19:00:34.989733252Z" level=info msg="TearDown network for sandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\" successfully" Feb 13 19:00:34.990330 containerd[1753]: time="2025-02-13T19:00:34.989743852Z" level=info msg="StopPodSandbox for \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\" returns successfully" Feb 13 19:00:34.990884 containerd[1753]: time="2025-02-13T19:00:34.990817492Z" level=info msg="StopPodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\"" Feb 13 19:00:34.991966 containerd[1753]: time="2025-02-13T19:00:34.991654092Z" level=info msg="TearDown network for sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" successfully" Feb 13 19:00:34.991966 containerd[1753]: time="2025-02-13T19:00:34.991683492Z" level=info msg="StopPodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" returns successfully" Feb 13 19:00:34.992498 containerd[1753]: time="2025-02-13T19:00:34.992194133Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\"" Feb 13 19:00:34.992498 containerd[1753]: time="2025-02-13T19:00:34.992280813Z" level=info msg="TearDown network for sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" successfully" Feb 13 19:00:34.992498 containerd[1753]: time="2025-02-13T19:00:34.992293653Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" returns successfully" Feb 13 19:00:34.994361 containerd[1753]: time="2025-02-13T19:00:34.994318053Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\"" Feb 13 19:00:34.994519 containerd[1753]: time="2025-02-13T19:00:34.994460013Z" level=info msg="TearDown network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" successfully" Feb 13 19:00:34.994519 containerd[1753]: time="2025-02-13T19:00:34.994471933Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" returns successfully" Feb 13 19:00:34.995216 containerd[1753]: time="2025-02-13T19:00:34.995183933Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\"" Feb 13 19:00:34.995348 containerd[1753]: time="2025-02-13T19:00:34.995284893Z" level=info msg="TearDown network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" successfully" Feb 13 19:00:34.995348 containerd[1753]: time="2025-02-13T19:00:34.995300253Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" returns successfully" Feb 13 19:00:34.995870 kubelet[2567]: I0213 19:00:34.995805 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894" Feb 13 19:00:34.997417 containerd[1753]: time="2025-02-13T19:00:34.997222453Z" level=info msg="StopPodSandbox for \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\"" Feb 13 19:00:34.997525 containerd[1753]: time="2025-02-13T19:00:34.997422973Z" level=info msg="Ensure that sandbox 
2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894 in task-service has been cleanup successfully" Feb 13 19:00:34.998230 containerd[1753]: time="2025-02-13T19:00:34.998188853Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\"" Feb 13 19:00:34.998316 containerd[1753]: time="2025-02-13T19:00:34.998296174Z" level=info msg="TearDown network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" successfully" Feb 13 19:00:34.998316 containerd[1753]: time="2025-02-13T19:00:34.998311214Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" returns successfully" Feb 13 19:00:35.000705 containerd[1753]: time="2025-02-13T19:00:35.000648734Z" level=info msg="TearDown network for sandbox \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\" successfully" Feb 13 19:00:35.000705 containerd[1753]: time="2025-02-13T19:00:35.000690934Z" level=info msg="StopPodSandbox for \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\" returns successfully" Feb 13 19:00:35.001150 systemd[1]: run-netns-cni\x2d03260e8a\x2d14be\x2d06e3\x2dae5c\x2d53821ed8757c.mount: Deactivated successfully. Feb 13 19:00:35.001750 containerd[1753]: time="2025-02-13T19:00:35.001661054Z" level=info msg="StopPodSandbox for \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\"" Feb 13 19:00:35.002526 containerd[1753]: time="2025-02-13T19:00:35.001785414Z" level=info msg="TearDown network for sandbox \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\" successfully" Feb 13 19:00:35.002526 containerd[1753]: time="2025-02-13T19:00:35.001796494Z" level=info msg="StopPodSandbox for \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\" returns successfully" Feb 13 19:00:35.002526 containerd[1753]: time="2025-02-13T19:00:35.001915334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:8,}" Feb 13 19:00:35.007502 containerd[1753]: time="2025-02-13T19:00:35.007258255Z" level=info msg="StopPodSandbox for \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\"" Feb 13 19:00:35.008120 containerd[1753]: time="2025-02-13T19:00:35.007992135Z" level=info msg="TearDown network for sandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\" successfully" Feb 13 19:00:35.008120 containerd[1753]: time="2025-02-13T19:00:35.008019135Z" level=info msg="StopPodSandbox for \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\" returns successfully" Feb 13 19:00:35.009320 containerd[1753]: time="2025-02-13T19:00:35.009285655Z" level=info msg="StopPodSandbox for \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\"" Feb 13 19:00:35.010236 containerd[1753]: time="2025-02-13T19:00:35.010166135Z" level=info msg="TearDown network for sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" successfully" Feb 13 19:00:35.010236 containerd[1753]: time="2025-02-13T19:00:35.010199775Z" level=info msg="StopPodSandbox for \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" returns successfully" Feb 13 19:00:35.012742 containerd[1753]: time="2025-02-13T19:00:35.011409096Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\"" Feb 13 19:00:35.012742 containerd[1753]: time="2025-02-13T19:00:35.011552456Z" level=info 
msg="TearDown network for sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" successfully" Feb 13 19:00:35.012742 containerd[1753]: time="2025-02-13T19:00:35.011576976Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" returns successfully" Feb 13 19:00:35.013600 containerd[1753]: time="2025-02-13T19:00:35.013416656Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\"" Feb 13 19:00:35.013711 containerd[1753]: time="2025-02-13T19:00:35.013686416Z" level=info msg="TearDown network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" successfully" Feb 13 19:00:35.013711 containerd[1753]: time="2025-02-13T19:00:35.013708216Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" returns successfully" Feb 13 19:00:35.016308 containerd[1753]: time="2025-02-13T19:00:35.016249816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:6,}" Feb 13 19:00:35.587623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3461979384.mount: Deactivated successfully. Feb 13 19:00:35.781076 containerd[1753]: time="2025-02-13T19:00:35.780991535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:35.786854 containerd[1753]: time="2025-02-13T19:00:35.786703176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 19:00:35.792363 containerd[1753]: time="2025-02-13T19:00:35.791393817Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:35.798762 containerd[1753]: time="2025-02-13T19:00:35.798703658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:35.799422 containerd[1753]: time="2025-02-13T19:00:35.799380618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 7.881164945s" Feb 13 19:00:35.799422 containerd[1753]: time="2025-02-13T19:00:35.799418858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 19:00:35.817598 containerd[1753]: time="2025-02-13T19:00:35.817534301Z" level=info msg="CreateContainer within sandbox \"bfe7b03f0714e3a8d84d4cfa84ed9c7ad07feaf33433532c987afd6b6efaad35\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:00:35.830486 kubelet[2567]: E0213 19:00:35.830417 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:35.834131 containerd[1753]: time="2025-02-13T19:00:35.833934663Z" level=error msg="Failed to destroy network for sandbox 
\"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:35.836236 containerd[1753]: time="2025-02-13T19:00:35.834540863Z" level=error msg="encountered an error cleaning up failed sandbox \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:35.836236 containerd[1753]: time="2025-02-13T19:00:35.834610383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:35.836412 kubelet[2567]: E0213 19:00:35.834845 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:35.836412 kubelet[2567]: E0213 19:00:35.834904 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:35.836412 kubelet[2567]: E0213 19:00:35.834925 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:35.836524 kubelet[2567]: E0213 19:00:35.834962 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:35.837406 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865-shm.mount: Deactivated successfully. Feb 13 19:00:35.868377 containerd[1753]: time="2025-02-13T19:00:35.868168789Z" level=error msg="Failed to destroy network for sandbox \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:35.869488 containerd[1753]: time="2025-02-13T19:00:35.869113109Z" level=error msg="encountered an error cleaning up failed sandbox \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:35.869488 containerd[1753]: time="2025-02-13T19:00:35.869197429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:35.869699 kubelet[2567]: E0213 19:00:35.869515 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:35.869699 kubelet[2567]: E0213 19:00:35.869596 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:35.869699 kubelet[2567]: E0213 19:00:35.869631 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:35.869784 kubelet[2567]: E0213 19:00:35.869687 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-bdh6p" podUID="1691e8e1-fddf-4f28-8c3f-71dc519ee6e4" Feb 13 19:00:35.876281 containerd[1753]: time="2025-02-13T19:00:35.876130470Z" level=info msg="CreateContainer within sandbox \"bfe7b03f0714e3a8d84d4cfa84ed9c7ad07feaf33433532c987afd6b6efaad35\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"917a96f781b1381e977fb8f825806a516c73c04b630425a712deafb92208b245\"" Feb 13 19:00:35.877519 containerd[1753]: time="2025-02-13T19:00:35.876771750Z" level=info msg="StartContainer for \"917a96f781b1381e977fb8f825806a516c73c04b630425a712deafb92208b245\"" Feb 13 19:00:35.903651 systemd[1]: Started cri-containerd-917a96f781b1381e977fb8f825806a516c73c04b630425a712deafb92208b245.scope - libcontainer container 917a96f781b1381e977fb8f825806a516c73c04b630425a712deafb92208b245. Feb 13 19:00:35.937524 containerd[1753]: time="2025-02-13T19:00:35.937380519Z" level=info msg="StartContainer for \"917a96f781b1381e977fb8f825806a516c73c04b630425a712deafb92208b245\" returns successfully" Feb 13 19:00:36.001762 kubelet[2567]: I0213 19:00:36.000970 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd" Feb 13 19:00:36.001966 containerd[1753]: time="2025-02-13T19:00:36.001868449Z" level=info msg="StopPodSandbox for \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\"" Feb 13 19:00:36.002298 containerd[1753]: time="2025-02-13T19:00:36.002080609Z" level=info msg="Ensure that sandbox 9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd in task-service has been cleanup successfully" Feb 13 19:00:36.002298 containerd[1753]: time="2025-02-13T19:00:36.002280529Z" level=info msg="TearDown network for sandbox \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\" successfully" Feb 13 19:00:36.002298 containerd[1753]: time="2025-02-13T19:00:36.002295889Z" level=info msg="StopPodSandbox for \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\" returns successfully" Feb 13 19:00:36.002822 containerd[1753]: time="2025-02-13T19:00:36.002788570Z" level=info msg="StopPodSandbox for \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\"" Feb 13 19:00:36.002893 containerd[1753]: time="2025-02-13T19:00:36.002884290Z" level=info msg="TearDown network for sandbox \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\" successfully" Feb 13 19:00:36.002930 containerd[1753]: time="2025-02-13T19:00:36.002895570Z" level=info msg="StopPodSandbox for \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\" returns successfully" Feb 13 19:00:36.004313 containerd[1753]: time="2025-02-13T19:00:36.004219570Z" level=info msg="StopPodSandbox for \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\"" Feb 13 19:00:36.004476 containerd[1753]: time="2025-02-13T19:00:36.004322010Z" level=info msg="TearDown network for sandbox \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\" successfully" Feb 13 19:00:36.004476 containerd[1753]: time="2025-02-13T19:00:36.004332730Z" level=info msg="StopPodSandbox for \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\" returns successfully" Feb 13 19:00:36.005566 containerd[1753]: time="2025-02-13T19:00:36.005519170Z" level=info msg="StopPodSandbox for 
\"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\"" Feb 13 19:00:36.005632 containerd[1753]: time="2025-02-13T19:00:36.005623450Z" level=info msg="TearDown network for sandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\" successfully" Feb 13 19:00:36.005656 containerd[1753]: time="2025-02-13T19:00:36.005633690Z" level=info msg="StopPodSandbox for \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\" returns successfully" Feb 13 19:00:36.007280 containerd[1753]: time="2025-02-13T19:00:36.006887690Z" level=info msg="StopPodSandbox for \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\"" Feb 13 19:00:36.007280 containerd[1753]: time="2025-02-13T19:00:36.006990850Z" level=info msg="TearDown network for sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" successfully" Feb 13 19:00:36.007280 containerd[1753]: time="2025-02-13T19:00:36.007001330Z" level=info msg="StopPodSandbox for \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" returns successfully" Feb 13 19:00:36.008623 containerd[1753]: time="2025-02-13T19:00:36.008291050Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\"" Feb 13 19:00:36.008623 containerd[1753]: time="2025-02-13T19:00:36.008383250Z" level=info msg="TearDown network for sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" successfully" Feb 13 19:00:36.008623 containerd[1753]: time="2025-02-13T19:00:36.008393210Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" returns successfully" Feb 13 19:00:36.012913 containerd[1753]: time="2025-02-13T19:00:36.012736131Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\"" Feb 13 19:00:36.012913 containerd[1753]: time="2025-02-13T19:00:36.012850251Z" level=info msg="TearDown network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" successfully" Feb 13 19:00:36.012913 containerd[1753]: time="2025-02-13T19:00:36.012861971Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" returns successfully" Feb 13 19:00:36.014855 containerd[1753]: time="2025-02-13T19:00:36.014805731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:7,}" Feb 13 19:00:36.021301 kubelet[2567]: I0213 19:00:36.021213 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865" Feb 13 19:00:36.022246 containerd[1753]: time="2025-02-13T19:00:36.022104133Z" level=info msg="StopPodSandbox for \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\"" Feb 13 19:00:36.022384 containerd[1753]: time="2025-02-13T19:00:36.022293853Z" level=info msg="Ensure that sandbox ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865 in task-service has been cleanup successfully" Feb 13 19:00:36.023343 containerd[1753]: time="2025-02-13T19:00:36.022923533Z" level=info msg="TearDown network for sandbox \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\" successfully" Feb 13 19:00:36.023343 containerd[1753]: time="2025-02-13T19:00:36.022950573Z" level=info msg="StopPodSandbox for \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\" returns successfully" 
Feb 13 19:00:36.023343 containerd[1753]: time="2025-02-13T19:00:36.023283173Z" level=info msg="StopPodSandbox for \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\"" Feb 13 19:00:36.023534 containerd[1753]: time="2025-02-13T19:00:36.023369013Z" level=info msg="TearDown network for sandbox \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\" successfully" Feb 13 19:00:36.023534 containerd[1753]: time="2025-02-13T19:00:36.023420613Z" level=info msg="StopPodSandbox for \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\" returns successfully" Feb 13 19:00:36.024169 containerd[1753]: time="2025-02-13T19:00:36.023862213Z" level=info msg="StopPodSandbox for \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\"" Feb 13 19:00:36.024169 containerd[1753]: time="2025-02-13T19:00:36.023949253Z" level=info msg="TearDown network for sandbox \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\" successfully" Feb 13 19:00:36.024169 containerd[1753]: time="2025-02-13T19:00:36.023959533Z" level=info msg="StopPodSandbox for \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\" returns successfully" Feb 13 19:00:36.024424 containerd[1753]: time="2025-02-13T19:00:36.024249293Z" level=info msg="StopPodSandbox for \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\"" Feb 13 19:00:36.024424 containerd[1753]: time="2025-02-13T19:00:36.024340413Z" level=info msg="TearDown network for sandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\" successfully" Feb 13 19:00:36.024424 containerd[1753]: time="2025-02-13T19:00:36.024350413Z" level=info msg="StopPodSandbox for \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\" returns successfully" Feb 13 19:00:36.024777 containerd[1753]: time="2025-02-13T19:00:36.024650973Z" level=info msg="StopPodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\"" Feb 13 19:00:36.024777 containerd[1753]: time="2025-02-13T19:00:36.024763293Z" level=info msg="TearDown network for sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" successfully" Feb 13 19:00:36.024777 containerd[1753]: time="2025-02-13T19:00:36.024774133Z" level=info msg="StopPodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" returns successfully" Feb 13 19:00:36.025091 containerd[1753]: time="2025-02-13T19:00:36.025064813Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\"" Feb 13 19:00:36.025222 containerd[1753]: time="2025-02-13T19:00:36.025150733Z" level=info msg="TearDown network for sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" successfully" Feb 13 19:00:36.025222 containerd[1753]: time="2025-02-13T19:00:36.025167333Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" returns successfully" Feb 13 19:00:36.025597 containerd[1753]: time="2025-02-13T19:00:36.025478373Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\"" Feb 13 19:00:36.025597 containerd[1753]: time="2025-02-13T19:00:36.025560813Z" level=info msg="TearDown network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" successfully" Feb 13 19:00:36.025597 containerd[1753]: time="2025-02-13T19:00:36.025570493Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" 
returns successfully" Feb 13 19:00:36.026502 containerd[1753]: time="2025-02-13T19:00:36.025821533Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\"" Feb 13 19:00:36.026502 containerd[1753]: time="2025-02-13T19:00:36.025901213Z" level=info msg="TearDown network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" successfully" Feb 13 19:00:36.026502 containerd[1753]: time="2025-02-13T19:00:36.025912973Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" returns successfully" Feb 13 19:00:36.026502 containerd[1753]: time="2025-02-13T19:00:36.026238133Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\"" Feb 13 19:00:36.026502 containerd[1753]: time="2025-02-13T19:00:36.026312133Z" level=info msg="TearDown network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" successfully" Feb 13 19:00:36.026502 containerd[1753]: time="2025-02-13T19:00:36.026322533Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" returns successfully" Feb 13 19:00:36.026817 containerd[1753]: time="2025-02-13T19:00:36.026781333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:9,}" Feb 13 19:00:36.029056 kubelet[2567]: I0213 19:00:36.028958 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h8mcp" podStartSLOduration=4.35648931 podStartE2EDuration="33.028939814s" podCreationTimestamp="2025-02-13 19:00:03 +0000 UTC" firstStartedPulling="2025-02-13 19:00:07.128581194 +0000 UTC m=+6.067773529" lastFinishedPulling="2025-02-13 19:00:35.801031698 +0000 UTC m=+34.740224033" observedRunningTime="2025-02-13 19:00:36.028666294 +0000 UTC m=+34.967858669" watchObservedRunningTime="2025-02-13 19:00:36.028939814 +0000 UTC m=+34.968132189" Feb 13 19:00:36.153140 containerd[1753]: time="2025-02-13T19:00:36.152907113Z" level=error msg="Failed to destroy network for sandbox \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:36.153643 containerd[1753]: time="2025-02-13T19:00:36.153514593Z" level=error msg="encountered an error cleaning up failed sandbox \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:36.153643 containerd[1753]: time="2025-02-13T19:00:36.153592993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:7,} failed, error" error="failed to setup network for sandbox \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:36.155109 kubelet[2567]: E0213 19:00:36.154366 2567 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:36.155109 kubelet[2567]: E0213 19:00:36.154563 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:36.155109 kubelet[2567]: E0213 19:00:36.154590 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bdh6p" Feb 13 19:00:36.155406 kubelet[2567]: E0213 19:00:36.154651 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-bdh6p_default(1691e8e1-fddf-4f28-8c3f-71dc519ee6e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-bdh6p" podUID="1691e8e1-fddf-4f28-8c3f-71dc519ee6e4" Feb 13 19:00:36.174673 containerd[1753]: time="2025-02-13T19:00:36.174531436Z" level=error msg="Failed to destroy network for sandbox \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:36.175215 containerd[1753]: time="2025-02-13T19:00:36.175083196Z" level=error msg="encountered an error cleaning up failed sandbox \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:36.175215 containerd[1753]: time="2025-02-13T19:00:36.175163156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:36.175913 kubelet[2567]: E0213 
19:00:36.175540 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:00:36.175913 kubelet[2567]: E0213 19:00:36.175602 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:36.175913 kubelet[2567]: E0213 19:00:36.175622 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x96qx" Feb 13 19:00:36.176067 kubelet[2567]: E0213 19:00:36.175664 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x96qx_calico-system(4c5666df-4229-496f-8e68-a4354f6b8968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x96qx" podUID="4c5666df-4229-496f-8e68-a4354f6b8968" Feb 13 19:00:36.228794 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:00:36.228958 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Feb 13 19:00:36.791555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133688421.mount: Deactivated successfully. Feb 13 19:00:36.791649 systemd[1]: run-netns-cni\x2df840ba72\x2d032a\x2d16c6\x2d36ee\x2dc6ef79ad60cb.mount: Deactivated successfully. Feb 13 19:00:36.791699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd-shm.mount: Deactivated successfully. Feb 13 19:00:36.791751 systemd[1]: run-netns-cni\x2d4a519d81\x2d3c54\x2d8d81\x2d2f9f\x2dee7fcd60fe94.mount: Deactivated successfully.
Feb 13 19:00:36.830671 kubelet[2567]: E0213 19:00:36.830593 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:37.028215 kubelet[2567]: I0213 19:00:37.028180 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1" Feb 13 19:00:37.029415 containerd[1753]: time="2025-02-13T19:00:37.029357249Z" level=info msg="StopPodSandbox for \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\"" Feb 13 19:00:37.029821 containerd[1753]: time="2025-02-13T19:00:37.029605529Z" level=info msg="Ensure that sandbox 5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1 in task-service has been cleanup successfully" Feb 13 19:00:37.031461 systemd[1]: run-netns-cni\x2daeeabf9e\x2d487a\x2d3df3\x2d74b4\x2d4013b0eae49d.mount: Deactivated successfully. Feb 13 19:00:37.033646 containerd[1753]: time="2025-02-13T19:00:37.033601210Z" level=info msg="TearDown network for sandbox \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\" successfully" Feb 13 19:00:37.033762 containerd[1753]: time="2025-02-13T19:00:37.033657330Z" level=info msg="StopPodSandbox for \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\" returns successfully" Feb 13 19:00:37.034979 containerd[1753]: time="2025-02-13T19:00:37.034630010Z" level=info msg="StopPodSandbox for \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\"" Feb 13 19:00:37.034979 containerd[1753]: time="2025-02-13T19:00:37.034730010Z" level=info msg="TearDown network for sandbox \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\" successfully" Feb 13 19:00:37.034979 containerd[1753]: time="2025-02-13T19:00:37.034741970Z" level=info msg="StopPodSandbox for \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\" returns successfully" Feb 13 19:00:37.035360 containerd[1753]: time="2025-02-13T19:00:37.035328650Z" level=info msg="StopPodSandbox for \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\"" Feb 13 19:00:37.035483 containerd[1753]: time="2025-02-13T19:00:37.035418810Z" level=info msg="TearDown network for sandbox \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\" successfully" Feb 13 19:00:37.035585 containerd[1753]: time="2025-02-13T19:00:37.035479210Z" level=info msg="StopPodSandbox for \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\" returns successfully" Feb 13 19:00:37.035918 containerd[1753]: time="2025-02-13T19:00:37.035847770Z" level=info msg="StopPodSandbox for \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\"" Feb 13 19:00:37.035970 containerd[1753]: time="2025-02-13T19:00:37.035944210Z" level=info msg="TearDown network for sandbox \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\" successfully" Feb 13 19:00:37.035970 containerd[1753]: time="2025-02-13T19:00:37.035956490Z" level=info msg="StopPodSandbox for \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\" returns successfully" Feb 13 19:00:37.037005 containerd[1753]: time="2025-02-13T19:00:37.036740650Z" level=info msg="StopPodSandbox for \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\"" Feb 13 19:00:37.037005 containerd[1753]: time="2025-02-13T19:00:37.036826370Z" level=info msg="TearDown network for sandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\" successfully" Feb 13 19:00:37.037005 
containerd[1753]: time="2025-02-13T19:00:37.036836330Z" level=info msg="StopPodSandbox for \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\" returns successfully" Feb 13 19:00:37.037654 containerd[1753]: time="2025-02-13T19:00:37.037362690Z" level=info msg="StopPodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\"" Feb 13 19:00:37.037654 containerd[1753]: time="2025-02-13T19:00:37.037486570Z" level=info msg="TearDown network for sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" successfully" Feb 13 19:00:37.037654 containerd[1753]: time="2025-02-13T19:00:37.037499450Z" level=info msg="StopPodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" returns successfully" Feb 13 19:00:37.038233 containerd[1753]: time="2025-02-13T19:00:37.037939130Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\"" Feb 13 19:00:37.038233 containerd[1753]: time="2025-02-13T19:00:37.038027210Z" level=info msg="TearDown network for sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" successfully" Feb 13 19:00:37.038233 containerd[1753]: time="2025-02-13T19:00:37.038052370Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" returns successfully" Feb 13 19:00:37.038875 containerd[1753]: time="2025-02-13T19:00:37.038833330Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\"" Feb 13 19:00:37.038963 containerd[1753]: time="2025-02-13T19:00:37.038941571Z" level=info msg="TearDown network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" successfully" Feb 13 19:00:37.038963 containerd[1753]: time="2025-02-13T19:00:37.038958251Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" returns successfully" Feb 13 19:00:37.040154 containerd[1753]: time="2025-02-13T19:00:37.039338331Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\"" Feb 13 19:00:37.040154 containerd[1753]: time="2025-02-13T19:00:37.039458291Z" level=info msg="TearDown network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" successfully" Feb 13 19:00:37.040154 containerd[1753]: time="2025-02-13T19:00:37.039468851Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" returns successfully" Feb 13 19:00:37.040327 kubelet[2567]: I0213 19:00:37.039565 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7" Feb 13 19:00:37.040365 containerd[1753]: time="2025-02-13T19:00:37.040316171Z" level=info msg="StopPodSandbox for \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\"" Feb 13 19:00:37.040798 containerd[1753]: time="2025-02-13T19:00:37.040760291Z" level=info msg="Ensure that sandbox 87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7 in task-service has been cleanup successfully" Feb 13 19:00:37.042604 systemd[1]: run-netns-cni\x2db6e3e7e0\x2d3f76\x2d7ab0\x2dd92b\x2d106ab5ed320b.mount: Deactivated successfully. 
Feb 13 19:00:37.045539 containerd[1753]: time="2025-02-13T19:00:37.045401452Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\"" Feb 13 19:00:37.045690 containerd[1753]: time="2025-02-13T19:00:37.045615692Z" level=info msg="TearDown network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" successfully" Feb 13 19:00:37.045690 containerd[1753]: time="2025-02-13T19:00:37.045629892Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" returns successfully" Feb 13 19:00:37.045741 containerd[1753]: time="2025-02-13T19:00:37.045698812Z" level=info msg="TearDown network for sandbox \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\" successfully" Feb 13 19:00:37.045741 containerd[1753]: time="2025-02-13T19:00:37.045709412Z" level=info msg="StopPodSandbox for \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\" returns successfully" Feb 13 19:00:37.049116 containerd[1753]: time="2025-02-13T19:00:37.048510532Z" level=info msg="StopPodSandbox for \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\"" Feb 13 19:00:37.049116 containerd[1753]: time="2025-02-13T19:00:37.048687532Z" level=info msg="TearDown network for sandbox \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\" successfully" Feb 13 19:00:37.049116 containerd[1753]: time="2025-02-13T19:00:37.048699332Z" level=info msg="StopPodSandbox for \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\" returns successfully" Feb 13 19:00:37.049116 containerd[1753]: time="2025-02-13T19:00:37.048894252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:10,}" Feb 13 19:00:37.050158 containerd[1753]: time="2025-02-13T19:00:37.050046172Z" level=info msg="StopPodSandbox for \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\"" Feb 13 19:00:37.050286 containerd[1753]: time="2025-02-13T19:00:37.050167372Z" level=info msg="TearDown network for sandbox \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\" successfully" Feb 13 19:00:37.050286 containerd[1753]: time="2025-02-13T19:00:37.050180212Z" level=info msg="StopPodSandbox for \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\" returns successfully" Feb 13 19:00:37.051341 containerd[1753]: time="2025-02-13T19:00:37.051113252Z" level=info msg="StopPodSandbox for \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\"" Feb 13 19:00:37.051341 containerd[1753]: time="2025-02-13T19:00:37.051221292Z" level=info msg="TearDown network for sandbox \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\" successfully" Feb 13 19:00:37.051341 containerd[1753]: time="2025-02-13T19:00:37.051232332Z" level=info msg="StopPodSandbox for \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\" returns successfully" Feb 13 19:00:37.052924 containerd[1753]: time="2025-02-13T19:00:37.052869373Z" level=info msg="StopPodSandbox for \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\"" Feb 13 19:00:37.053194 containerd[1753]: time="2025-02-13T19:00:37.053055373Z" level=info msg="TearDown network for sandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\" successfully" Feb 13 19:00:37.053194 containerd[1753]: time="2025-02-13T19:00:37.053092333Z" level=info msg="StopPodSandbox for 
\"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\" returns successfully" Feb 13 19:00:37.053958 containerd[1753]: time="2025-02-13T19:00:37.053834573Z" level=info msg="StopPodSandbox for \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\"" Feb 13 19:00:37.053958 containerd[1753]: time="2025-02-13T19:00:37.053944813Z" level=info msg="TearDown network for sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" successfully" Feb 13 19:00:37.053958 containerd[1753]: time="2025-02-13T19:00:37.053956133Z" level=info msg="StopPodSandbox for \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" returns successfully" Feb 13 19:00:37.055304 containerd[1753]: time="2025-02-13T19:00:37.055139613Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\"" Feb 13 19:00:37.055304 containerd[1753]: time="2025-02-13T19:00:37.055253493Z" level=info msg="TearDown network for sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" successfully" Feb 13 19:00:37.055304 containerd[1753]: time="2025-02-13T19:00:37.055263573Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" returns successfully" Feb 13 19:00:37.056260 containerd[1753]: time="2025-02-13T19:00:37.056223933Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\"" Feb 13 19:00:37.056340 containerd[1753]: time="2025-02-13T19:00:37.056325133Z" level=info msg="TearDown network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" successfully" Feb 13 19:00:37.056340 containerd[1753]: time="2025-02-13T19:00:37.056336373Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" returns successfully" Feb 13 19:00:37.057500 containerd[1753]: time="2025-02-13T19:00:37.057373693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:8,}" Feb 13 19:00:37.297216 systemd-networkd[1503]: cali91309570c9b: Link UP Feb 13 19:00:37.298807 systemd-networkd[1503]: cali91309570c9b: Gained carrier Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.147 [INFO][3691] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.166 [INFO][3691] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.20.27-k8s-csi--node--driver--x96qx-eth0 csi-node-driver- calico-system 4c5666df-4229-496f-8e68-a4354f6b8968 1146 0 2025-02-13 19:00:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.200.20.27 csi-node-driver-x96qx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali91309570c9b [] []}} ContainerID="3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" Namespace="calico-system" Pod="csi-node-driver-x96qx" WorkloadEndpoint="10.200.20.27-k8s-csi--node--driver--x96qx-" Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.167 [INFO][3691] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" Namespace="calico-system" Pod="csi-node-driver-x96qx" WorkloadEndpoint="10.200.20.27-k8s-csi--node--driver--x96qx-eth0" Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.203 [INFO][3715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" HandleID="k8s-pod-network.3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" Workload="10.200.20.27-k8s-csi--node--driver--x96qx-eth0" Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.221 [INFO][3715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" HandleID="k8s-pod-network.3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" Workload="10.200.20.27-k8s-csi--node--driver--x96qx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000482a00), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.20.27", "pod":"csi-node-driver-x96qx", "timestamp":"2025-02-13 19:00:37.203266836 +0000 UTC"}, Hostname:"10.200.20.27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.221 [INFO][3715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.221 [INFO][3715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.221 [INFO][3715] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.20.27' Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.223 [INFO][3715] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" host="10.200.20.27" Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.226 [INFO][3715] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.20.27" Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.230 [INFO][3715] ipam/ipam.go 489: Trying affinity for 192.168.76.192/26 host="10.200.20.27" Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.232 [INFO][3715] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.192/26 host="10.200.20.27" Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.234 [INFO][3715] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="10.200.20.27" Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.234 [INFO][3715] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" host="10.200.20.27" Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.235 [INFO][3715] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.244 [INFO][3715] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" host="10.200.20.27" Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.254 [INFO][3715] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.193/26] block=192.168.76.192/26 handle="k8s-pod-network.3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" host="10.200.20.27" Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.254 [INFO][3715] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.193/26] handle="k8s-pod-network.3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" host="10.200.20.27" Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.254 [INFO][3715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:00:37.312583 containerd[1753]: 2025-02-13 19:00:37.254 [INFO][3715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.193/26] IPv6=[] ContainerID="3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" HandleID="k8s-pod-network.3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" Workload="10.200.20.27-k8s-csi--node--driver--x96qx-eth0" Feb 13 19:00:37.313169 containerd[1753]: 2025-02-13 19:00:37.257 [INFO][3691] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" Namespace="calico-system" Pod="csi-node-driver-x96qx" WorkloadEndpoint="10.200.20.27-k8s-csi--node--driver--x96qx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.27-k8s-csi--node--driver--x96qx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c5666df-4229-496f-8e68-a4354f6b8968", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 0, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.27", ContainerID:"", Pod:"csi-node-driver-x96qx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali91309570c9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:00:37.313169 containerd[1753]: 2025-02-13 19:00:37.257 [INFO][3691] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.193/32] ContainerID="3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" Namespace="calico-system" Pod="csi-node-driver-x96qx" WorkloadEndpoint="10.200.20.27-k8s-csi--node--driver--x96qx-eth0" Feb 13 19:00:37.313169 containerd[1753]: 2025-02-13 19:00:37.257 [INFO][3691] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91309570c9b ContainerID="3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" Namespace="calico-system" Pod="csi-node-driver-x96qx" WorkloadEndpoint="10.200.20.27-k8s-csi--node--driver--x96qx-eth0" Feb 13 19:00:37.313169 containerd[1753]: 2025-02-13 19:00:37.297 [INFO][3691] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" Namespace="calico-system" Pod="csi-node-driver-x96qx" WorkloadEndpoint="10.200.20.27-k8s-csi--node--driver--x96qx-eth0" Feb 13 19:00:37.313169 containerd[1753]: 2025-02-13 19:00:37.300 [INFO][3691] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" Namespace="calico-system" Pod="csi-node-driver-x96qx" WorkloadEndpoint="10.200.20.27-k8s-csi--node--driver--x96qx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.27-k8s-csi--node--driver--x96qx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c5666df-4229-496f-8e68-a4354f6b8968", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 0, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.27", ContainerID:"3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc", Pod:"csi-node-driver-x96qx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali91309570c9b", MAC:"8e:02:ac:dc:bd:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:00:37.313169 containerd[1753]: 2025-02-13 19:00:37.310 [INFO][3691] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc" Namespace="calico-system" Pod="csi-node-driver-x96qx" WorkloadEndpoint="10.200.20.27-k8s-csi--node--driver--x96qx-eth0" Feb 13 19:00:37.337752 containerd[1753]: time="2025-02-13T19:00:37.337487857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:00:37.338836 containerd[1753]: time="2025-02-13T19:00:37.337600777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:00:37.338942 containerd[1753]: time="2025-02-13T19:00:37.338875377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:00:37.339149 containerd[1753]: time="2025-02-13T19:00:37.339109777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:00:37.359671 systemd[1]: Started cri-containerd-3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc.scope - libcontainer container 3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc. 
Feb 13 19:00:37.368529 systemd-networkd[1503]: calibb61f9b46b1: Link UP Feb 13 19:00:37.368906 systemd-networkd[1503]: calibb61f9b46b1: Gained carrier Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.156 [INFO][3700] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.173 [INFO][3700] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.20.27-k8s-nginx--deployment--7fcdb87857--bdh6p-eth0 nginx-deployment-7fcdb87857- default 1691e8e1-fddf-4f28-8c3f-71dc519ee6e4 1247 0 2025-02-13 19:00:28 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.20.27 nginx-deployment-7fcdb87857-bdh6p eth0 default [] [] [kns.default ksa.default.default] calibb61f9b46b1 [] []}} ContainerID="e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" Namespace="default" Pod="nginx-deployment-7fcdb87857-bdh6p" WorkloadEndpoint="10.200.20.27-k8s-nginx--deployment--7fcdb87857--bdh6p-" Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.173 [INFO][3700] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" Namespace="default" Pod="nginx-deployment-7fcdb87857-bdh6p" WorkloadEndpoint="10.200.20.27-k8s-nginx--deployment--7fcdb87857--bdh6p-eth0" Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.209 [INFO][3719] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" HandleID="k8s-pod-network.e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" Workload="10.200.20.27-k8s-nginx--deployment--7fcdb87857--bdh6p-eth0" Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.223 [INFO][3719] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" HandleID="k8s-pod-network.e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" Workload="10.200.20.27-k8s-nginx--deployment--7fcdb87857--bdh6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011d110), Attrs:map[string]string{"namespace":"default", "node":"10.200.20.27", "pod":"nginx-deployment-7fcdb87857-bdh6p", "timestamp":"2025-02-13 19:00:37.209476117 +0000 UTC"}, Hostname:"10.200.20.27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.223 [INFO][3719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.254 [INFO][3719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.254 [INFO][3719] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.20.27' Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.323 [INFO][3719] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" host="10.200.20.27" Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.329 [INFO][3719] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.20.27" Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.333 [INFO][3719] ipam/ipam.go 489: Trying affinity for 192.168.76.192/26 host="10.200.20.27" Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.335 [INFO][3719] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.192/26 host="10.200.20.27" Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.338 [INFO][3719] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="10.200.20.27" Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.338 [INFO][3719] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" host="10.200.20.27" Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.340 [INFO][3719] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.348 [INFO][3719] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" host="10.200.20.27" Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.361 [INFO][3719] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.194/26] block=192.168.76.192/26 handle="k8s-pod-network.e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" host="10.200.20.27" Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.361 [INFO][3719] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.194/26] handle="k8s-pod-network.e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" host="10.200.20.27" Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.361 [INFO][3719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:00:37.387196 containerd[1753]: 2025-02-13 19:00:37.361 [INFO][3719] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.194/26] IPv6=[] ContainerID="e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" HandleID="k8s-pod-network.e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" Workload="10.200.20.27-k8s-nginx--deployment--7fcdb87857--bdh6p-eth0" Feb 13 19:00:37.387977 containerd[1753]: 2025-02-13 19:00:37.365 [INFO][3700] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" Namespace="default" Pod="nginx-deployment-7fcdb87857-bdh6p" WorkloadEndpoint="10.200.20.27-k8s-nginx--deployment--7fcdb87857--bdh6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.27-k8s-nginx--deployment--7fcdb87857--bdh6p-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"1691e8e1-fddf-4f28-8c3f-71dc519ee6e4", ResourceVersion:"1247", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 0, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.27", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-bdh6p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.76.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calibb61f9b46b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:00:37.387977 containerd[1753]: 2025-02-13 19:00:37.365 [INFO][3700] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.194/32] ContainerID="e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" Namespace="default" Pod="nginx-deployment-7fcdb87857-bdh6p" WorkloadEndpoint="10.200.20.27-k8s-nginx--deployment--7fcdb87857--bdh6p-eth0" Feb 13 19:00:37.387977 containerd[1753]: 2025-02-13 19:00:37.365 [INFO][3700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb61f9b46b1 ContainerID="e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" Namespace="default" Pod="nginx-deployment-7fcdb87857-bdh6p" WorkloadEndpoint="10.200.20.27-k8s-nginx--deployment--7fcdb87857--bdh6p-eth0" Feb 13 19:00:37.387977 containerd[1753]: 2025-02-13 19:00:37.368 [INFO][3700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" Namespace="default" Pod="nginx-deployment-7fcdb87857-bdh6p" WorkloadEndpoint="10.200.20.27-k8s-nginx--deployment--7fcdb87857--bdh6p-eth0" Feb 13 19:00:37.387977 containerd[1753]: 2025-02-13 19:00:37.369 [INFO][3700] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" Namespace="default" Pod="nginx-deployment-7fcdb87857-bdh6p" WorkloadEndpoint="10.200.20.27-k8s-nginx--deployment--7fcdb87857--bdh6p-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.27-k8s-nginx--deployment--7fcdb87857--bdh6p-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"1691e8e1-fddf-4f28-8c3f-71dc519ee6e4", ResourceVersion:"1247", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 0, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.27", ContainerID:"e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d", Pod:"nginx-deployment-7fcdb87857-bdh6p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.76.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calibb61f9b46b1", MAC:"2e:67:33:50:1e:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:00:37.387977 containerd[1753]: 2025-02-13 19:00:37.385 [INFO][3700] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d" Namespace="default" Pod="nginx-deployment-7fcdb87857-bdh6p" WorkloadEndpoint="10.200.20.27-k8s-nginx--deployment--7fcdb87857--bdh6p-eth0" Feb 13 19:00:37.394074 containerd[1753]: time="2025-02-13T19:00:37.393847026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x96qx,Uid:4c5666df-4229-496f-8e68-a4354f6b8968,Namespace:calico-system,Attempt:10,} returns sandbox id \"3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc\"" Feb 13 19:00:37.396011 containerd[1753]: time="2025-02-13T19:00:37.395879906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:00:37.415680 containerd[1753]: time="2025-02-13T19:00:37.415478589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:00:37.415968 containerd[1753]: time="2025-02-13T19:00:37.415916109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:00:37.416297 containerd[1753]: time="2025-02-13T19:00:37.416249829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:00:37.416579 containerd[1753]: time="2025-02-13T19:00:37.416497229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:00:37.437667 systemd[1]: Started cri-containerd-e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d.scope - libcontainer container e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d. 
Feb 13 19:00:37.471039 containerd[1753]: time="2025-02-13T19:00:37.470189597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bdh6p,Uid:1691e8e1-fddf-4f28-8c3f-71dc519ee6e4,Namespace:default,Attempt:8,} returns sandbox id \"e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d\"" Feb 13 19:00:37.756466 kernel: bpftool[3944]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:00:37.831057 kubelet[2567]: E0213 19:00:37.831015 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:38.010217 systemd-networkd[1503]: vxlan.calico: Link UP Feb 13 19:00:38.010230 systemd-networkd[1503]: vxlan.calico: Gained carrier Feb 13 19:00:38.589621 systemd-networkd[1503]: cali91309570c9b: Gained IPv6LL Feb 13 19:00:38.831354 kubelet[2567]: E0213 19:00:38.831298 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:38.909699 systemd-networkd[1503]: calibb61f9b46b1: Gained IPv6LL Feb 13 19:00:39.348407 containerd[1753]: time="2025-02-13T19:00:39.348350729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:39.351018 containerd[1753]: time="2025-02-13T19:00:39.350854930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 19:00:39.356567 containerd[1753]: time="2025-02-13T19:00:39.356515730Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:39.362367 containerd[1753]: time="2025-02-13T19:00:39.362302211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:39.363061 containerd[1753]: time="2025-02-13T19:00:39.362934851Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.967008345s" Feb 13 19:00:39.363061 containerd[1753]: time="2025-02-13T19:00:39.362969891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 19:00:39.364886 containerd[1753]: time="2025-02-13T19:00:39.364841972Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:00:39.365871 containerd[1753]: time="2025-02-13T19:00:39.365711132Z" level=info msg="CreateContainer within sandbox \"3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:00:39.425363 containerd[1753]: time="2025-02-13T19:00:39.425312061Z" level=info msg="CreateContainer within sandbox \"3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c17fb327451dceb046631013cc585741afb08ac844a9f8147df0e8c639b8cf5f\"" Feb 13 19:00:39.426421 containerd[1753]: time="2025-02-13T19:00:39.426387061Z" level=info 
msg="StartContainer for \"c17fb327451dceb046631013cc585741afb08ac844a9f8147df0e8c639b8cf5f\"" Feb 13 19:00:39.462689 systemd[1]: Started cri-containerd-c17fb327451dceb046631013cc585741afb08ac844a9f8147df0e8c639b8cf5f.scope - libcontainer container c17fb327451dceb046631013cc585741afb08ac844a9f8147df0e8c639b8cf5f. Feb 13 19:00:39.496021 containerd[1753]: time="2025-02-13T19:00:39.495969632Z" level=info msg="StartContainer for \"c17fb327451dceb046631013cc585741afb08ac844a9f8147df0e8c639b8cf5f\" returns successfully" Feb 13 19:00:39.832076 kubelet[2567]: E0213 19:00:39.832021 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:40.061695 systemd-networkd[1503]: vxlan.calico: Gained IPv6LL Feb 13 19:00:40.832223 kubelet[2567]: E0213 19:00:40.832177 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:41.832532 kubelet[2567]: E0213 19:00:41.832490 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:42.804296 kubelet[2567]: E0213 19:00:42.804245 2567 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:42.833477 kubelet[2567]: E0213 19:00:42.833152 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:43.687205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537976890.mount: Deactivated successfully. Feb 13 19:00:43.833796 kubelet[2567]: E0213 19:00:43.833701 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:44.724930 containerd[1753]: time="2025-02-13T19:00:44.724864562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:44.728478 containerd[1753]: time="2025-02-13T19:00:44.728390239Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 19:00:44.732919 containerd[1753]: time="2025-02-13T19:00:44.732854034Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:44.740183 containerd[1753]: time="2025-02-13T19:00:44.740114187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:44.741473 containerd[1753]: time="2025-02-13T19:00:44.741142666Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 5.376258054s" Feb 13 19:00:44.741473 containerd[1753]: time="2025-02-13T19:00:44.741371626Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:00:44.747976 containerd[1753]: time="2025-02-13T19:00:44.747925419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 
13 19:00:44.749493 containerd[1753]: time="2025-02-13T19:00:44.749309138Z" level=info msg="CreateContainer within sandbox \"e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:00:44.807477 containerd[1753]: time="2025-02-13T19:00:44.807379239Z" level=info msg="CreateContainer within sandbox \"e2a9cfc5e71b21cce1278aea5f8e8192860edd0ab3e82e0d80ac8593f396794d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2d00ac1cce6ab4e11410722001407316db092c01e7918e6a18629e960712b485\"" Feb 13 19:00:44.808342 containerd[1753]: time="2025-02-13T19:00:44.808304358Z" level=info msg="StartContainer for \"2d00ac1cce6ab4e11410722001407316db092c01e7918e6a18629e960712b485\"" Feb 13 19:00:44.834582 kubelet[2567]: E0213 19:00:44.834517 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:44.845729 systemd[1]: Started cri-containerd-2d00ac1cce6ab4e11410722001407316db092c01e7918e6a18629e960712b485.scope - libcontainer container 2d00ac1cce6ab4e11410722001407316db092c01e7918e6a18629e960712b485. Feb 13 19:00:44.881891 containerd[1753]: time="2025-02-13T19:00:44.881780284Z" level=info msg="StartContainer for \"2d00ac1cce6ab4e11410722001407316db092c01e7918e6a18629e960712b485\" returns successfully" Feb 13 19:00:45.835454 kubelet[2567]: E0213 19:00:45.835392 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:46.636556 containerd[1753]: time="2025-02-13T19:00:46.636497147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:46.639335 containerd[1753]: time="2025-02-13T19:00:46.639270024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 19:00:46.643170 containerd[1753]: time="2025-02-13T19:00:46.643082420Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:46.649798 containerd[1753]: time="2025-02-13T19:00:46.649705293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:46.650613 containerd[1753]: time="2025-02-13T19:00:46.650402053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.902426354s" Feb 13 19:00:46.650613 containerd[1753]: time="2025-02-13T19:00:46.650463333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 19:00:46.653477 containerd[1753]: time="2025-02-13T19:00:46.653282530Z" level=info msg="CreateContainer within sandbox \"3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:00:46.704915 containerd[1753]: time="2025-02-13T19:00:46.704700558Z" level=info msg="CreateContainer within sandbox \"3e6c752cfc6236dda1d7cbb8755f2c51730477be83c3b51ae91a74cb175b77dc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bf5906f2d09c784b8dec942adf52130dd7ef6ac05f1290c747c7d1bd57af07e7\"" Feb 13 19:00:46.705669 containerd[1753]: time="2025-02-13T19:00:46.705631277Z" level=info msg="StartContainer for \"bf5906f2d09c784b8dec942adf52130dd7ef6ac05f1290c747c7d1bd57af07e7\"" Feb 13 19:00:46.737680 systemd[1]: Started cri-containerd-bf5906f2d09c784b8dec942adf52130dd7ef6ac05f1290c747c7d1bd57af07e7.scope - libcontainer container bf5906f2d09c784b8dec942adf52130dd7ef6ac05f1290c747c7d1bd57af07e7. Feb 13 19:00:46.774149 containerd[1753]: time="2025-02-13T19:00:46.774094847Z" level=info msg="StartContainer for \"bf5906f2d09c784b8dec942adf52130dd7ef6ac05f1290c747c7d1bd57af07e7\" returns successfully" Feb 13 19:00:46.836215 kubelet[2567]: E0213 19:00:46.836165 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:46.921782 kubelet[2567]: I0213 19:00:46.921738 2567 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:00:46.921782 kubelet[2567]: I0213 19:00:46.921776 2567 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:00:47.107056 kubelet[2567]: I0213 19:00:47.106934 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-bdh6p" podStartSLOduration=11.830926288 podStartE2EDuration="19.10690939s" podCreationTimestamp="2025-02-13 19:00:28 +0000 UTC" firstStartedPulling="2025-02-13 19:00:37.471245838 +0000 UTC m=+36.410438213" lastFinishedPulling="2025-02-13 19:00:44.74722894 +0000 UTC m=+43.686421315" observedRunningTime="2025-02-13 19:00:45.094025109 +0000 UTC m=+44.033217524" watchObservedRunningTime="2025-02-13 19:00:47.10690939 +0000 UTC m=+46.046101805" Feb 13 19:00:47.107267 kubelet[2567]: I0213 19:00:47.107177 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-x96qx" podStartSLOduration=34.851005365 podStartE2EDuration="44.10717143s" podCreationTimestamp="2025-02-13 19:00:03 +0000 UTC" firstStartedPulling="2025-02-13 19:00:37.395511706 +0000 UTC m=+36.334704081" lastFinishedPulling="2025-02-13 19:00:46.651677811 +0000 UTC m=+45.590870146" observedRunningTime="2025-02-13 19:00:47.105765552 +0000 UTC m=+46.044957967" watchObservedRunningTime="2025-02-13 19:00:47.10717143 +0000 UTC m=+46.046363805" Feb 13 19:00:47.837060 kubelet[2567]: E0213 19:00:47.837011 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:48.837473 kubelet[2567]: E0213 19:00:48.837404 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:49.837773 kubelet[2567]: E0213 19:00:49.837717 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:50.670374 systemd[1]: Created slice kubepods-besteffort-pod8f7f8c73_0466_45df_96ec_99c00d8ca3c0.slice - libcontainer 
container kubepods-besteffort-pod8f7f8c73_0466_45df_96ec_99c00d8ca3c0.slice. Feb 13 19:00:50.725755 kubelet[2567]: I0213 19:00:50.725699 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/8f7f8c73-0466-45df-96ec-99c00d8ca3c0-data\") pod \"nfs-server-provisioner-0\" (UID: \"8f7f8c73-0466-45df-96ec-99c00d8ca3c0\") " pod="default/nfs-server-provisioner-0" Feb 13 19:00:50.725755 kubelet[2567]: I0213 19:00:50.725748 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5sm6\" (UniqueName: \"kubernetes.io/projected/8f7f8c73-0466-45df-96ec-99c00d8ca3c0-kube-api-access-l5sm6\") pod \"nfs-server-provisioner-0\" (UID: \"8f7f8c73-0466-45df-96ec-99c00d8ca3c0\") " pod="default/nfs-server-provisioner-0" Feb 13 19:00:50.838993 kubelet[2567]: E0213 19:00:50.838592 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:50.974369 containerd[1753]: time="2025-02-13T19:00:50.974171695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8f7f8c73-0466-45df-96ec-99c00d8ca3c0,Namespace:default,Attempt:0,}" Feb 13 19:00:51.143886 systemd-networkd[1503]: cali60e51b789ff: Link UP Feb 13 19:00:51.144140 systemd-networkd[1503]: cali60e51b789ff: Gained carrier Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.065 [INFO][4203] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.20.27-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 8f7f8c73-0466-45df-96ec-99c00d8ca3c0 1380 0 2025-02-13 19:00:50 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.20.27 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.27-k8s-nfs--server--provisioner--0-" Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.066 [INFO][4203] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.27-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.093 [INFO][4213] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" HandleID="k8s-pod-network.aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" Workload="10.200.20.27-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 
19:00:51.106 [INFO][4213] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" HandleID="k8s-pod-network.aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" Workload="10.200.20.27-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003034f0), Attrs:map[string]string{"namespace":"default", "node":"10.200.20.27", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:00:51.093840083 +0000 UTC"}, Hostname:"10.200.20.27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.106 [INFO][4213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.106 [INFO][4213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.106 [INFO][4213] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.20.27' Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.108 [INFO][4213] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" host="10.200.20.27" Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.113 [INFO][4213] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.20.27" Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.117 [INFO][4213] ipam/ipam.go 489: Trying affinity for 192.168.76.192/26 host="10.200.20.27" Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.119 [INFO][4213] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.192/26 host="10.200.20.27" Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.121 [INFO][4213] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="10.200.20.27" Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.121 [INFO][4213] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" host="10.200.20.27" Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.123 [INFO][4213] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2 Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.127 [INFO][4213] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" host="10.200.20.27" Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.137 [INFO][4213] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.195/26] block=192.168.76.192/26 handle="k8s-pod-network.aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" host="10.200.20.27" Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.137 [INFO][4213] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.195/26] handle="k8s-pod-network.aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" host="10.200.20.27" Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.137 [INFO][4213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:00:51.164993 containerd[1753]: 2025-02-13 19:00:51.137 [INFO][4213] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.195/26] IPv6=[] ContainerID="aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" HandleID="k8s-pod-network.aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" Workload="10.200.20.27-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:00:51.165681 containerd[1753]: 2025-02-13 19:00:51.139 [INFO][4203] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.27-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.27-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"8f7f8c73-0466-45df-96ec-99c00d8ca3c0", ResourceVersion:"1380", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 0, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.27", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.76.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:00:51.165681 containerd[1753]: 2025-02-13 19:00:51.139 [INFO][4203] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.195/32] ContainerID="aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.27-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:00:51.165681 containerd[1753]: 2025-02-13 19:00:51.139 [INFO][4203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.27-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:00:51.165681 containerd[1753]: 2025-02-13 19:00:51.143 [INFO][4203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.27-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:00:51.165826 containerd[1753]: 2025-02-13 19:00:51.143 [INFO][4203] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.27-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.27-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"8f7f8c73-0466-45df-96ec-99c00d8ca3c0", ResourceVersion:"1380", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 0, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.27", ContainerID:"aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.76.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"82:9a:73:c3:32:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:00:51.165826 containerd[1753]: 2025-02-13 19:00:51.161 [INFO][4203] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.27-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:00:51.189496 containerd[1753]: time="2025-02-13T19:00:51.189218698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:00:51.189496 containerd[1753]: time="2025-02-13T19:00:51.189294738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:00:51.189496 containerd[1753]: time="2025-02-13T19:00:51.189311978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:00:51.189496 containerd[1753]: time="2025-02-13T19:00:51.189443658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:00:51.218672 systemd[1]: Started cri-containerd-aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2.scope - libcontainer container aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2. 
Feb 13 19:00:51.254106 containerd[1753]: time="2025-02-13T19:00:51.253731987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8f7f8c73-0466-45df-96ec-99c00d8ca3c0,Namespace:default,Attempt:0,} returns sandbox id \"aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2\"" Feb 13 19:00:51.259013 containerd[1753]: time="2025-02-13T19:00:51.258750621Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:00:51.838532 systemd[1]: run-containerd-runc-k8s.io-aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2-runc.ZhFNc1.mount: Deactivated successfully. Feb 13 19:00:51.839395 kubelet[2567]: E0213 19:00:51.838958 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:52.349610 systemd-networkd[1503]: cali60e51b789ff: Gained IPv6LL Feb 13 19:00:52.839640 kubelet[2567]: E0213 19:00:52.839593 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:53.791693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3466202080.mount: Deactivated successfully. Feb 13 19:00:53.840184 kubelet[2567]: E0213 19:00:53.840109 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:54.840604 kubelet[2567]: E0213 19:00:54.840552 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:55.684476 containerd[1753]: time="2025-02-13T19:00:55.684373899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:55.687785 containerd[1753]: time="2025-02-13T19:00:55.687702535Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Feb 13 19:00:55.692901 containerd[1753]: time="2025-02-13T19:00:55.692831329Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:55.700292 containerd[1753]: time="2025-02-13T19:00:55.700247721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:55.702005 containerd[1753]: time="2025-02-13T19:00:55.701242160Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 4.442441899s" Feb 13 19:00:55.702005 containerd[1753]: time="2025-02-13T19:00:55.701288440Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 19:00:55.703888 containerd[1753]: time="2025-02-13T19:00:55.703831397Z" level=info msg="CreateContainer within sandbox \"aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2\" for container 
&ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:00:55.751339 containerd[1753]: time="2025-02-13T19:00:55.751287065Z" level=info msg="CreateContainer within sandbox \"aee9a50101feae2312f8a0fa5831b5d99bd6fddaa4eb3ba544d76df2ebb7a9a2\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9314c47430d08fd11d443e12b86c0de9983ae89cc8188fcbd7ae44482b4c4318\"" Feb 13 19:00:55.752077 containerd[1753]: time="2025-02-13T19:00:55.752031624Z" level=info msg="StartContainer for \"9314c47430d08fd11d443e12b86c0de9983ae89cc8188fcbd7ae44482b4c4318\"" Feb 13 19:00:55.792669 systemd[1]: Started cri-containerd-9314c47430d08fd11d443e12b86c0de9983ae89cc8188fcbd7ae44482b4c4318.scope - libcontainer container 9314c47430d08fd11d443e12b86c0de9983ae89cc8188fcbd7ae44482b4c4318. Feb 13 19:00:55.822844 containerd[1753]: time="2025-02-13T19:00:55.822791786Z" level=info msg="StartContainer for \"9314c47430d08fd11d443e12b86c0de9983ae89cc8188fcbd7ae44482b4c4318\" returns successfully" Feb 13 19:00:55.841383 kubelet[2567]: E0213 19:00:55.841334 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:56.842037 kubelet[2567]: E0213 19:00:56.841984 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:57.843012 kubelet[2567]: E0213 19:00:57.842950 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:58.843489 kubelet[2567]: E0213 19:00:58.843427 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:00:59.844123 kubelet[2567]: E0213 19:00:59.844069 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:00.844731 kubelet[2567]: E0213 19:01:00.844620 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:01.845275 kubelet[2567]: E0213 19:01:01.845208 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:02.803998 kubelet[2567]: E0213 19:01:02.803953 2567 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:02.821680 containerd[1753]: time="2025-02-13T19:01:02.821637177Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\"" Feb 13 19:01:02.822060 containerd[1753]: time="2025-02-13T19:01:02.821784777Z" level=info msg="TearDown network for sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" successfully" Feb 13 19:01:02.822060 containerd[1753]: time="2025-02-13T19:01:02.821796857Z" level=info msg="StopPodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" returns successfully" Feb 13 19:01:02.823099 containerd[1753]: time="2025-02-13T19:01:02.823053616Z" level=info msg="RemovePodSandbox for \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\"" Feb 13 19:01:02.823099 containerd[1753]: time="2025-02-13T19:01:02.823098496Z" level=info msg="Forcibly stopping sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\"" Feb 13 19:01:02.823261 containerd[1753]: time="2025-02-13T19:01:02.823174136Z" level=info msg="TearDown network for 
sandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" successfully" Feb 13 19:01:02.836844 containerd[1753]: time="2025-02-13T19:01:02.836780161Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:01:02.837004 containerd[1753]: time="2025-02-13T19:01:02.836865001Z" level=info msg="RemovePodSandbox \"18a3b8d258b6d80a9d27cdfd6573544409e831d8885ac00ad9e0ebc45a6fe640\" returns successfully" Feb 13 19:01:02.837874 containerd[1753]: time="2025-02-13T19:01:02.837692880Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\"" Feb 13 19:01:02.837874 containerd[1753]: time="2025-02-13T19:01:02.837806440Z" level=info msg="TearDown network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" successfully" Feb 13 19:01:02.837874 containerd[1753]: time="2025-02-13T19:01:02.837822560Z" level=info msg="StopPodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" returns successfully" Feb 13 19:01:02.838445 containerd[1753]: time="2025-02-13T19:01:02.838414279Z" level=info msg="RemovePodSandbox for \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\"" Feb 13 19:01:02.838575 containerd[1753]: time="2025-02-13T19:01:02.838464239Z" level=info msg="Forcibly stopping sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\"" Feb 13 19:01:02.838575 containerd[1753]: time="2025-02-13T19:01:02.838525799Z" level=info msg="TearDown network for sandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" successfully" Feb 13 19:01:02.845979 kubelet[2567]: E0213 19:01:02.845919 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:02.847960 containerd[1753]: time="2025-02-13T19:01:02.847886429Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:01:02.847960 containerd[1753]: time="2025-02-13T19:01:02.847961469Z" level=info msg="RemovePodSandbox \"b1cc0f01808feaa57f6ad31e3ed2b0351b7576b833c7a0a55a4bbaf9866d0797\" returns successfully" Feb 13 19:01:02.848755 containerd[1753]: time="2025-02-13T19:01:02.848535949Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\"" Feb 13 19:01:02.848755 containerd[1753]: time="2025-02-13T19:01:02.848643388Z" level=info msg="TearDown network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" successfully" Feb 13 19:01:02.848755 containerd[1753]: time="2025-02-13T19:01:02.848653868Z" level=info msg="StopPodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" returns successfully" Feb 13 19:01:02.849172 containerd[1753]: time="2025-02-13T19:01:02.849140548Z" level=info msg="RemovePodSandbox for \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\"" Feb 13 19:01:02.849220 containerd[1753]: time="2025-02-13T19:01:02.849176988Z" level=info msg="Forcibly stopping sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\"" Feb 13 19:01:02.849286 containerd[1753]: time="2025-02-13T19:01:02.849253908Z" level=info msg="TearDown network for sandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" successfully" Feb 13 19:01:02.859206 containerd[1753]: time="2025-02-13T19:01:02.859150697Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:01:02.859368 containerd[1753]: time="2025-02-13T19:01:02.859225897Z" level=info msg="RemovePodSandbox \"3aeda726a894b59f4502f91c475b86d452528bd1f1ec3a9e7bc5b87c5e1a54e8\" returns successfully" Feb 13 19:01:02.859980 containerd[1753]: time="2025-02-13T19:01:02.859820256Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\"" Feb 13 19:01:02.859980 containerd[1753]: time="2025-02-13T19:01:02.859925336Z" level=info msg="TearDown network for sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" successfully" Feb 13 19:01:02.859980 containerd[1753]: time="2025-02-13T19:01:02.859935656Z" level=info msg="StopPodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" returns successfully" Feb 13 19:01:02.860328 containerd[1753]: time="2025-02-13T19:01:02.860307856Z" level=info msg="RemovePodSandbox for \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\"" Feb 13 19:01:02.860355 containerd[1753]: time="2025-02-13T19:01:02.860333696Z" level=info msg="Forcibly stopping sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\"" Feb 13 19:01:02.860539 containerd[1753]: time="2025-02-13T19:01:02.860390576Z" level=info msg="TearDown network for sandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" successfully" Feb 13 19:01:02.869407 containerd[1753]: time="2025-02-13T19:01:02.869329526Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:01:02.869407 containerd[1753]: time="2025-02-13T19:01:02.869402806Z" level=info msg="RemovePodSandbox \"b6a1bbc32bb2c7a5eb3cfd1fd5e4ba3bb109631e49d185060bacef84584a71b3\" returns successfully" Feb 13 19:01:02.869917 containerd[1753]: time="2025-02-13T19:01:02.869889486Z" level=info msg="StopPodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\"" Feb 13 19:01:02.870220 containerd[1753]: time="2025-02-13T19:01:02.870129525Z" level=info msg="TearDown network for sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" successfully" Feb 13 19:01:02.870220 containerd[1753]: time="2025-02-13T19:01:02.870146485Z" level=info msg="StopPodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" returns successfully" Feb 13 19:01:02.870577 containerd[1753]: time="2025-02-13T19:01:02.870424845Z" level=info msg="RemovePodSandbox for \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\"" Feb 13 19:01:02.870577 containerd[1753]: time="2025-02-13T19:01:02.870467365Z" level=info msg="Forcibly stopping sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\"" Feb 13 19:01:02.870577 containerd[1753]: time="2025-02-13T19:01:02.870544085Z" level=info msg="TearDown network for sandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" successfully" Feb 13 19:01:02.880571 containerd[1753]: time="2025-02-13T19:01:02.880507954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:01:02.880571 containerd[1753]: time="2025-02-13T19:01:02.880575634Z" level=info msg="RemovePodSandbox \"282141faf5fefe5ec025b5b3ea20174cdcca864bcaa2889583f9e1e9a354b6dd\" returns successfully" Feb 13 19:01:02.881415 containerd[1753]: time="2025-02-13T19:01:02.881214474Z" level=info msg="StopPodSandbox for \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\"" Feb 13 19:01:02.881415 containerd[1753]: time="2025-02-13T19:01:02.881337913Z" level=info msg="TearDown network for sandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\" successfully" Feb 13 19:01:02.881415 containerd[1753]: time="2025-02-13T19:01:02.881347913Z" level=info msg="StopPodSandbox for \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\" returns successfully" Feb 13 19:01:02.882210 containerd[1753]: time="2025-02-13T19:01:02.881734633Z" level=info msg="RemovePodSandbox for \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\"" Feb 13 19:01:02.882210 containerd[1753]: time="2025-02-13T19:01:02.881764473Z" level=info msg="Forcibly stopping sandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\"" Feb 13 19:01:02.882210 containerd[1753]: time="2025-02-13T19:01:02.881836433Z" level=info msg="TearDown network for sandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\" successfully" Feb 13 19:01:02.891423 containerd[1753]: time="2025-02-13T19:01:02.891366023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:01:02.891591 containerd[1753]: time="2025-02-13T19:01:02.891447943Z" level=info msg="RemovePodSandbox \"b1368d3188ef1378ee087f0df43efd90a98775b81648caad7a06d25fabc2634e\" returns successfully" Feb 13 19:01:02.892819 containerd[1753]: time="2025-02-13T19:01:02.892055662Z" level=info msg="StopPodSandbox for \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\"" Feb 13 19:01:02.892819 containerd[1753]: time="2025-02-13T19:01:02.892161702Z" level=info msg="TearDown network for sandbox \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\" successfully" Feb 13 19:01:02.892819 containerd[1753]: time="2025-02-13T19:01:02.892172382Z" level=info msg="StopPodSandbox for \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\" returns successfully" Feb 13 19:01:02.892819 containerd[1753]: time="2025-02-13T19:01:02.892496342Z" level=info msg="RemovePodSandbox for \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\"" Feb 13 19:01:02.892819 containerd[1753]: time="2025-02-13T19:01:02.892519542Z" level=info msg="Forcibly stopping sandbox \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\"" Feb 13 19:01:02.892819 containerd[1753]: time="2025-02-13T19:01:02.892570981Z" level=info msg="TearDown network for sandbox \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\" successfully" Feb 13 19:01:02.904387 containerd[1753]: time="2025-02-13T19:01:02.904329289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:01:02.904544 containerd[1753]: time="2025-02-13T19:01:02.904407809Z" level=info msg="RemovePodSandbox \"37909fb551dfefc83b862f254596fb3f6bb8010c2c63a12b1c6b7da48d611531\" returns successfully" Feb 13 19:01:02.905477 containerd[1753]: time="2025-02-13T19:01:02.904929968Z" level=info msg="StopPodSandbox for \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\"" Feb 13 19:01:02.905477 containerd[1753]: time="2025-02-13T19:01:02.905056928Z" level=info msg="TearDown network for sandbox \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\" successfully" Feb 13 19:01:02.905477 containerd[1753]: time="2025-02-13T19:01:02.905068408Z" level=info msg="StopPodSandbox for \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\" returns successfully" Feb 13 19:01:02.905477 containerd[1753]: time="2025-02-13T19:01:02.905336728Z" level=info msg="RemovePodSandbox for \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\"" Feb 13 19:01:02.905477 containerd[1753]: time="2025-02-13T19:01:02.905355448Z" level=info msg="Forcibly stopping sandbox \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\"" Feb 13 19:01:02.905477 containerd[1753]: time="2025-02-13T19:01:02.905410448Z" level=info msg="TearDown network for sandbox \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\" successfully" Feb 13 19:01:02.944759 containerd[1753]: time="2025-02-13T19:01:02.944501926Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:01:02.944759 containerd[1753]: time="2025-02-13T19:01:02.944568726Z" level=info msg="RemovePodSandbox \"efb6119bc6e783c7538d1bf1293ae78dd4295a6ef6d7324f7168e7ab260fe64f\" returns successfully" Feb 13 19:01:02.945295 containerd[1753]: time="2025-02-13T19:01:02.945103165Z" level=info msg="StopPodSandbox for \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\"" Feb 13 19:01:02.945295 containerd[1753]: time="2025-02-13T19:01:02.945216245Z" level=info msg="TearDown network for sandbox \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\" successfully" Feb 13 19:01:02.945295 containerd[1753]: time="2025-02-13T19:01:02.945227685Z" level=info msg="StopPodSandbox for \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\" returns successfully" Feb 13 19:01:02.945764 containerd[1753]: time="2025-02-13T19:01:02.945732525Z" level=info msg="RemovePodSandbox for \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\"" Feb 13 19:01:02.945814 containerd[1753]: time="2025-02-13T19:01:02.945783125Z" level=info msg="Forcibly stopping sandbox \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\"" Feb 13 19:01:02.945876 containerd[1753]: time="2025-02-13T19:01:02.945855845Z" level=info msg="TearDown network for sandbox \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\" successfully" Feb 13 19:01:02.955136 containerd[1753]: time="2025-02-13T19:01:02.955079555Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:01:02.955259 containerd[1753]: time="2025-02-13T19:01:02.955151275Z" level=info msg="RemovePodSandbox \"ac205b859d478dcd9e8f0f62ca960b447199f09b46847a8e701eac06e5c5e865\" returns successfully" Feb 13 19:01:02.955942 containerd[1753]: time="2025-02-13T19:01:02.955769594Z" level=info msg="StopPodSandbox for \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\"" Feb 13 19:01:02.955942 containerd[1753]: time="2025-02-13T19:01:02.955883714Z" level=info msg="TearDown network for sandbox \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\" successfully" Feb 13 19:01:02.955942 containerd[1753]: time="2025-02-13T19:01:02.955894594Z" level=info msg="StopPodSandbox for \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\" returns successfully" Feb 13 19:01:02.956483 containerd[1753]: time="2025-02-13T19:01:02.956413953Z" level=info msg="RemovePodSandbox for \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\"" Feb 13 19:01:02.956483 containerd[1753]: time="2025-02-13T19:01:02.956468233Z" level=info msg="Forcibly stopping sandbox \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\"" Feb 13 19:01:02.956564 containerd[1753]: time="2025-02-13T19:01:02.956534633Z" level=info msg="TearDown network for sandbox \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\" successfully" Feb 13 19:01:02.964426 containerd[1753]: time="2025-02-13T19:01:02.964268905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:01:02.964426 containerd[1753]: time="2025-02-13T19:01:02.964386065Z" level=info msg="RemovePodSandbox \"5c00634d4a0f4bdd9e84ba74dfb3ebd2b951912f9a299f130fcc0e45622b91d1\" returns successfully" Feb 13 19:01:02.965782 containerd[1753]: time="2025-02-13T19:01:02.965751703Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\"" Feb 13 19:01:02.966738 containerd[1753]: time="2025-02-13T19:01:02.966589262Z" level=info msg="TearDown network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" successfully" Feb 13 19:01:02.966738 containerd[1753]: time="2025-02-13T19:01:02.966611222Z" level=info msg="StopPodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" returns successfully" Feb 13 19:01:02.968527 containerd[1753]: time="2025-02-13T19:01:02.968118901Z" level=info msg="RemovePodSandbox for \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\"" Feb 13 19:01:02.968527 containerd[1753]: time="2025-02-13T19:01:02.968155941Z" level=info msg="Forcibly stopping sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\"" Feb 13 19:01:02.968527 containerd[1753]: time="2025-02-13T19:01:02.968247781Z" level=info msg="TearDown network for sandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" successfully" Feb 13 19:01:02.981112 containerd[1753]: time="2025-02-13T19:01:02.981060167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:01:02.981455 containerd[1753]: time="2025-02-13T19:01:02.981333567Z" level=info msg="RemovePodSandbox \"4c89ede5fcd23afdaa353e5ace1ccbc9564019470218fbf5c1d404fec5eb8680\" returns successfully" Feb 13 19:01:02.981900 containerd[1753]: time="2025-02-13T19:01:02.981876926Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\"" Feb 13 19:01:02.982129 containerd[1753]: time="2025-02-13T19:01:02.982051846Z" level=info msg="TearDown network for sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" successfully" Feb 13 19:01:02.982129 containerd[1753]: time="2025-02-13T19:01:02.982067646Z" level=info msg="StopPodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" returns successfully" Feb 13 19:01:02.983417 containerd[1753]: time="2025-02-13T19:01:02.982586725Z" level=info msg="RemovePodSandbox for \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\"" Feb 13 19:01:02.983417 containerd[1753]: time="2025-02-13T19:01:02.982613045Z" level=info msg="Forcibly stopping sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\"" Feb 13 19:01:02.983417 containerd[1753]: time="2025-02-13T19:01:02.982683205Z" level=info msg="TearDown network for sandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" successfully" Feb 13 19:01:02.995268 containerd[1753]: time="2025-02-13T19:01:02.995222232Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:01:02.995525 containerd[1753]: time="2025-02-13T19:01:02.995505111Z" level=info msg="RemovePodSandbox \"9977fbb78f3821fd3ff78cfff5860f44d0b56cc78de92679db070d6643caeb5a\" returns successfully" Feb 13 19:01:02.996073 containerd[1753]: time="2025-02-13T19:01:02.996045631Z" level=info msg="StopPodSandbox for \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\"" Feb 13 19:01:02.996272 containerd[1753]: time="2025-02-13T19:01:02.996252711Z" level=info msg="TearDown network for sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" successfully" Feb 13 19:01:02.996331 containerd[1753]: time="2025-02-13T19:01:02.996319231Z" level=info msg="StopPodSandbox for \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" returns successfully" Feb 13 19:01:02.996807 containerd[1753]: time="2025-02-13T19:01:02.996786750Z" level=info msg="RemovePodSandbox for \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\"" Feb 13 19:01:02.996994 containerd[1753]: time="2025-02-13T19:01:02.996976990Z" level=info msg="Forcibly stopping sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\"" Feb 13 19:01:02.997136 containerd[1753]: time="2025-02-13T19:01:02.997119750Z" level=info msg="TearDown network for sandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" successfully" Feb 13 19:01:03.005296 containerd[1753]: time="2025-02-13T19:01:03.005235701Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:01:03.005726 containerd[1753]: time="2025-02-13T19:01:03.005522181Z" level=info msg="RemovePodSandbox \"e27bdd97a258b9cb9b1908b8aafbf81b1eabffb7ffb75f3d264ed76dd1b9219e\" returns successfully" Feb 13 19:01:03.006034 containerd[1753]: time="2025-02-13T19:01:03.006001620Z" level=info msg="StopPodSandbox for \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\"" Feb 13 19:01:03.006139 containerd[1753]: time="2025-02-13T19:01:03.006113260Z" level=info msg="TearDown network for sandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\" successfully" Feb 13 19:01:03.006169 containerd[1753]: time="2025-02-13T19:01:03.006136660Z" level=info msg="StopPodSandbox for \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\" returns successfully" Feb 13 19:01:03.006532 containerd[1753]: time="2025-02-13T19:01:03.006505580Z" level=info msg="RemovePodSandbox for \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\"" Feb 13 19:01:03.006658 containerd[1753]: time="2025-02-13T19:01:03.006533700Z" level=info msg="Forcibly stopping sandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\"" Feb 13 19:01:03.006658 containerd[1753]: time="2025-02-13T19:01:03.006601220Z" level=info msg="TearDown network for sandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\" successfully" Feb 13 19:01:03.015625 containerd[1753]: time="2025-02-13T19:01:03.015566770Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:01:03.015625 containerd[1753]: time="2025-02-13T19:01:03.015639250Z" level=info msg="RemovePodSandbox \"5ca1b182f7cae8ad5b92be8370dcc87a1d818565142c3439988f9a90bc17a786\" returns successfully" Feb 13 19:01:03.016209 containerd[1753]: time="2025-02-13T19:01:03.016174329Z" level=info msg="StopPodSandbox for \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\"" Feb 13 19:01:03.016311 containerd[1753]: time="2025-02-13T19:01:03.016288089Z" level=info msg="TearDown network for sandbox \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\" successfully" Feb 13 19:01:03.016311 containerd[1753]: time="2025-02-13T19:01:03.016304929Z" level=info msg="StopPodSandbox for \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\" returns successfully" Feb 13 19:01:03.017157 containerd[1753]: time="2025-02-13T19:01:03.016676289Z" level=info msg="RemovePodSandbox for \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\"" Feb 13 19:01:03.017157 containerd[1753]: time="2025-02-13T19:01:03.016709169Z" level=info msg="Forcibly stopping sandbox \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\"" Feb 13 19:01:03.017157 containerd[1753]: time="2025-02-13T19:01:03.016774129Z" level=info msg="TearDown network for sandbox \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\" successfully" Feb 13 19:01:03.029598 containerd[1753]: time="2025-02-13T19:01:03.029533675Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:01:03.029598 containerd[1753]: time="2025-02-13T19:01:03.029599635Z" level=info msg="RemovePodSandbox \"4b35fcfa4bdd99e39d49bf96e6f6888e9d4e0f469dd9fb115fba737b7992b395\" returns successfully" Feb 13 19:01:03.030484 containerd[1753]: time="2025-02-13T19:01:03.030250674Z" level=info msg="StopPodSandbox for \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\"" Feb 13 19:01:03.030484 containerd[1753]: time="2025-02-13T19:01:03.030352754Z" level=info msg="TearDown network for sandbox \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\" successfully" Feb 13 19:01:03.030484 containerd[1753]: time="2025-02-13T19:01:03.030362514Z" level=info msg="StopPodSandbox for \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\" returns successfully" Feb 13 19:01:03.030853 containerd[1753]: time="2025-02-13T19:01:03.030824914Z" level=info msg="RemovePodSandbox for \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\"" Feb 13 19:01:03.030894 containerd[1753]: time="2025-02-13T19:01:03.030858154Z" level=info msg="Forcibly stopping sandbox \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\"" Feb 13 19:01:03.030951 containerd[1753]: time="2025-02-13T19:01:03.030929994Z" level=info msg="TearDown network for sandbox \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\" successfully" Feb 13 19:01:03.043338 containerd[1753]: time="2025-02-13T19:01:03.043290700Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:01:03.043478 containerd[1753]: time="2025-02-13T19:01:03.043359060Z" level=info msg="RemovePodSandbox \"2819f8eb6d329b11862026f5788ecdc04fb1c81b23c3007a90d828c411bbf894\" returns successfully" Feb 13 19:01:03.043894 containerd[1753]: time="2025-02-13T19:01:03.043863180Z" level=info msg="StopPodSandbox for \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\"" Feb 13 19:01:03.044001 containerd[1753]: time="2025-02-13T19:01:03.043977460Z" level=info msg="TearDown network for sandbox \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\" successfully" Feb 13 19:01:03.044001 containerd[1753]: time="2025-02-13T19:01:03.043997100Z" level=info msg="StopPodSandbox for \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\" returns successfully" Feb 13 19:01:03.045270 containerd[1753]: time="2025-02-13T19:01:03.044379699Z" level=info msg="RemovePodSandbox for \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\"" Feb 13 19:01:03.045270 containerd[1753]: time="2025-02-13T19:01:03.044408579Z" level=info msg="Forcibly stopping sandbox \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\"" Feb 13 19:01:03.045270 containerd[1753]: time="2025-02-13T19:01:03.044499419Z" level=info msg="TearDown network for sandbox \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\" successfully" Feb 13 19:01:03.053681 containerd[1753]: time="2025-02-13T19:01:03.053632769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:01:03.053870 containerd[1753]: time="2025-02-13T19:01:03.053852249Z" level=info msg="RemovePodSandbox \"9e3dcf9e6a2a977db6ceae9e90daef296fe9994af433e432c57877b5436285bd\" returns successfully" Feb 13 19:01:03.054514 containerd[1753]: time="2025-02-13T19:01:03.054393929Z" level=info msg="StopPodSandbox for \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\"" Feb 13 19:01:03.055623 containerd[1753]: time="2025-02-13T19:01:03.055587607Z" level=info msg="TearDown network for sandbox \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\" successfully" Feb 13 19:01:03.055623 containerd[1753]: time="2025-02-13T19:01:03.055614487Z" level=info msg="StopPodSandbox for \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\" returns successfully" Feb 13 19:01:03.056055 containerd[1753]: time="2025-02-13T19:01:03.056016247Z" level=info msg="RemovePodSandbox for \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\"" Feb 13 19:01:03.056055 containerd[1753]: time="2025-02-13T19:01:03.056054407Z" level=info msg="Forcibly stopping sandbox \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\"" Feb 13 19:01:03.056148 containerd[1753]: time="2025-02-13T19:01:03.056123727Z" level=info msg="TearDown network for sandbox \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\" successfully" Feb 13 19:01:03.067038 containerd[1753]: time="2025-02-13T19:01:03.066979555Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:01:03.067172 containerd[1753]: time="2025-02-13T19:01:03.067053835Z" level=info msg="RemovePodSandbox \"87936f0862e35b9c1137757006cfc9495a1f79cc1f6af2518658100b57f57dd7\" returns successfully" Feb 13 19:01:03.846306 kubelet[2567]: E0213 19:01:03.846241 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:04.846472 kubelet[2567]: E0213 19:01:04.846406 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:05.846619 kubelet[2567]: E0213 19:01:05.846539 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:06.846837 kubelet[2567]: E0213 19:01:06.846790 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:07.121022 kubelet[2567]: I0213 19:01:07.120773 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=12.674503169 podStartE2EDuration="17.120755783s" podCreationTimestamp="2025-02-13 19:00:50 +0000 UTC" firstStartedPulling="2025-02-13 19:00:51.255867825 +0000 UTC m=+50.195060200" lastFinishedPulling="2025-02-13 19:00:55.702120479 +0000 UTC m=+54.641312814" observedRunningTime="2025-02-13 19:00:56.130733446 +0000 UTC m=+55.069925821" watchObservedRunningTime="2025-02-13 19:01:07.120755783 +0000 UTC m=+66.059948158" Feb 13 19:01:07.847066 kubelet[2567]: E0213 19:01:07.847018 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:08.847924 kubelet[2567]: E0213 19:01:08.847868 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:09.848268 kubelet[2567]: E0213 19:01:09.848218 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:10.848383 kubelet[2567]: E0213 19:01:10.848331 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:11.849240 kubelet[2567]: E0213 19:01:11.849140 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:12.849563 kubelet[2567]: E0213 19:01:12.849516 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:13.849814 kubelet[2567]: E0213 19:01:13.849757 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:14.850184 kubelet[2567]: E0213 19:01:14.850140 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:15.851003 kubelet[2567]: E0213 19:01:15.850958 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:16.851933 kubelet[2567]: E0213 19:01:16.851882 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:17.852513 kubelet[2567]: E0213 19:01:17.852453 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:18.853113 
kubelet[2567]: E0213 19:01:18.853053 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:19.854086 kubelet[2567]: E0213 19:01:19.854034 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:20.288729 systemd[1]: Created slice kubepods-besteffort-pode3e9d3d9_91cd_40ed_9d9d_f62b15f621ea.slice - libcontainer container kubepods-besteffort-pode3e9d3d9_91cd_40ed_9d9d_f62b15f621ea.slice. Feb 13 19:01:20.390715 kubelet[2567]: I0213 19:01:20.390668 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0acfe199-8958-434c-b9d6-db6902abb11b\" (UniqueName: \"kubernetes.io/nfs/e3e9d3d9-91cd-40ed-9d9d-f62b15f621ea-pvc-0acfe199-8958-434c-b9d6-db6902abb11b\") pod \"test-pod-1\" (UID: \"e3e9d3d9-91cd-40ed-9d9d-f62b15f621ea\") " pod="default/test-pod-1" Feb 13 19:01:20.390715 kubelet[2567]: I0213 19:01:20.390715 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7t9x\" (UniqueName: \"kubernetes.io/projected/e3e9d3d9-91cd-40ed-9d9d-f62b15f621ea-kube-api-access-s7t9x\") pod \"test-pod-1\" (UID: \"e3e9d3d9-91cd-40ed-9d9d-f62b15f621ea\") " pod="default/test-pod-1" Feb 13 19:01:20.613583 kernel: FS-Cache: Loaded Feb 13 19:01:20.687247 kernel: RPC: Registered named UNIX socket transport module. Feb 13 19:01:20.687380 kernel: RPC: Registered udp transport module. Feb 13 19:01:20.687409 kernel: RPC: Registered tcp transport module. Feb 13 19:01:20.695796 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 19:01:20.695997 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 19:01:20.854343 kubelet[2567]: E0213 19:01:20.854281 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:21.023789 kernel: NFS: Registering the id_resolver key type Feb 13 19:01:21.023921 kernel: Key type id_resolver registered Feb 13 19:01:21.023946 kernel: Key type id_legacy registered Feb 13 19:01:21.229242 nfsidmap[4432]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.1-a-c0811b896b' Feb 13 19:01:21.331311 nfsidmap[4435]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.1-a-c0811b896b' Feb 13 19:01:21.494114 containerd[1753]: time="2025-02-13T19:01:21.493508372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e3e9d3d9-91cd-40ed-9d9d-f62b15f621ea,Namespace:default,Attempt:0,}" Feb 13 19:01:21.745634 systemd-networkd[1503]: cali5ec59c6bf6e: Link UP Feb 13 19:01:21.745885 systemd-networkd[1503]: cali5ec59c6bf6e: Gained carrier Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.570 [INFO][4436] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.20.27-k8s-test--pod--1-eth0 default e3e9d3d9-91cd-40ed-9d9d-f62b15f621ea 1478 0 2025-02-13 19:00:51 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.20.27 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.27-k8s-test--pod--1-" Feb 
13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.570 [INFO][4436] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.27-k8s-test--pod--1-eth0" Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.597 [INFO][4447] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" HandleID="k8s-pod-network.b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" Workload="10.200.20.27-k8s-test--pod--1-eth0" Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.711 [INFO][4447] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" HandleID="k8s-pod-network.b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" Workload="10.200.20.27-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000221520), Attrs:map[string]string{"namespace":"default", "node":"10.200.20.27", "pod":"test-pod-1", "timestamp":"2025-02-13 19:01:21.597391308 +0000 UTC"}, Hostname:"10.200.20.27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.711 [INFO][4447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.712 [INFO][4447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.712 [INFO][4447] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.20.27' Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.714 [INFO][4447] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" host="10.200.20.27" Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.717 [INFO][4447] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.20.27" Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.721 [INFO][4447] ipam/ipam.go 489: Trying affinity for 192.168.76.192/26 host="10.200.20.27" Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.722 [INFO][4447] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.192/26 host="10.200.20.27" Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.724 [INFO][4447] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="10.200.20.27" Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.725 [INFO][4447] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" host="10.200.20.27" Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.726 [INFO][4447] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.731 [INFO][4447] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" host="10.200.20.27" Feb 13 19:01:21.761926 
containerd[1753]: 2025-02-13 19:01:21.740 [INFO][4447] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.196/26] block=192.168.76.192/26 handle="k8s-pod-network.b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" host="10.200.20.27" Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.740 [INFO][4447] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.196/26] handle="k8s-pod-network.b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" host="10.200.20.27" Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.740 [INFO][4447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.740 [INFO][4447] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.196/26] IPv6=[] ContainerID="b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" HandleID="k8s-pod-network.b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" Workload="10.200.20.27-k8s-test--pod--1-eth0" Feb 13 19:01:21.761926 containerd[1753]: 2025-02-13 19:01:21.742 [INFO][4436] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.27-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.27-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"e3e9d3d9-91cd-40ed-9d9d-f62b15f621ea", ResourceVersion:"1478", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.27", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.76.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:01:21.762535 containerd[1753]: 2025-02-13 19:01:21.742 [INFO][4436] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.196/32] ContainerID="b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.27-k8s-test--pod--1-eth0" Feb 13 19:01:21.762535 containerd[1753]: 2025-02-13 19:01:21.742 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.27-k8s-test--pod--1-eth0" Feb 13 19:01:21.762535 containerd[1753]: 2025-02-13 19:01:21.746 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.27-k8s-test--pod--1-eth0" Feb 13 19:01:21.762535 containerd[1753]: 2025-02-13 19:01:21.748 [INFO][4436] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.27-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.27-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"e3e9d3d9-91cd-40ed-9d9d-f62b15f621ea", ResourceVersion:"1478", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.27", ContainerID:"b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.76.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"c6:b9:4b:00:87:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:01:21.762535 containerd[1753]: 2025-02-13 19:01:21.758 [INFO][4436] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.27-k8s-test--pod--1-eth0" Feb 13 19:01:21.784721 containerd[1753]: time="2025-02-13T19:01:21.784578601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:01:21.784721 containerd[1753]: time="2025-02-13T19:01:21.784654121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:01:21.784721 containerd[1753]: time="2025-02-13T19:01:21.784681041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:21.785012 containerd[1753]: time="2025-02-13T19:01:21.784764561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:21.817675 systemd[1]: Started cri-containerd-b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe.scope - libcontainer container b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe. 
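Editor's note: the Calico trace above shows the CNI plugin taking the host-wide IPAM lock, confirming that node 10.200.20.27 already has affinity for the block 192.168.76.192/26, claiming 192.168.76.196 from that block for test-pod-1, and then writing the WorkloadEndpoint with host interface cali5ec59c6bf6e and MAC c6:b9:4b:00:87:1a (the pod address is reported to the endpoint as 192.168.76.196/32). The snippet below only checks the address arithmetic implied by those entries; it is a standalone illustration, not Calico code.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Affinity block claimed by node 10.200.20.27, per the IPAM log above.
	block := netip.MustParsePrefix("192.168.76.192/26")
	// Address handed to test-pod-1.
	podIP := netip.MustParseAddr("192.168.76.196")

	fmt.Printf("block %s contains %s: %v\n", block, podIP, block.Contains(podIP))
	fmt.Printf("addresses per /26 block: %d\n", 1<<(32-block.Bits())) // 64
}
```

A /26 block gives the node 64 addresses to hand out before IPAM has to claim or borrow another block.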
Feb 13 19:01:21.852872 containerd[1753]: time="2025-02-13T19:01:21.852818892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e3e9d3d9-91cd-40ed-9d9d-f62b15f621ea,Namespace:default,Attempt:0,} returns sandbox id \"b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe\"" Feb 13 19:01:21.854707 kubelet[2567]: E0213 19:01:21.854615 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:21.855337 containerd[1753]: time="2025-02-13T19:01:21.854968810Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:01:22.340500 containerd[1753]: time="2025-02-13T19:01:22.340028765Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:22.343809 containerd[1753]: time="2025-02-13T19:01:22.343742322Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 19:01:22.346807 containerd[1753]: time="2025-02-13T19:01:22.346756319Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 491.737829ms" Feb 13 19:01:22.346807 containerd[1753]: time="2025-02-13T19:01:22.346802838Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:01:22.349358 containerd[1753]: time="2025-02-13T19:01:22.349305116Z" level=info msg="CreateContainer within sandbox \"b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 19:01:22.378276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1893387695.mount: Deactivated successfully. Feb 13 19:01:22.390445 containerd[1753]: time="2025-02-13T19:01:22.390327395Z" level=info msg="CreateContainer within sandbox \"b6ed832b86e81eba9d326fae45758d60207ce293744c5f1e01b9216fb5f1c8fe\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"fcc53cbc089cacd87cb2aaa33d95d102e17b3aef45d6e8859eb83961fca68255\"" Feb 13 19:01:22.391076 containerd[1753]: time="2025-02-13T19:01:22.390967594Z" level=info msg="StartContainer for \"fcc53cbc089cacd87cb2aaa33d95d102e17b3aef45d6e8859eb83961fca68255\"" Feb 13 19:01:22.417696 systemd[1]: Started cri-containerd-fcc53cbc089cacd87cb2aaa33d95d102e17b3aef45d6e8859eb83961fca68255.scope - libcontainer container fcc53cbc089cacd87cb2aaa33d95d102e17b3aef45d6e8859eb83961fca68255. 
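Editor's note: once the sandbox is up, the kubelet asks the runtime to pull ghcr.io/flatcar/nginx:latest, which resolves to the digest sha256:d9bc3da9... in roughly 492 ms; the "active requests=0, bytes read=61" line suggests the layers were already in the content store and only the manifest was re-verified. The container is then created and started inside the b6ed832b... sandbox under its own cri-containerd systemd scope. Below is a rough sketch of the same pull using the containerd 1.x Go client; the socket path and the k8s.io namespace (the namespace the CRI plugin stores Kubernetes images in) are assumptions about this node, and the exact client import path differs on containerd 2.x.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed default containerd socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull (and unpack) the same image the kubelet requested in the log.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/nginx:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull: %v", err)
	}

	fmt.Println("name:  ", img.Name())
	fmt.Println("digest:", img.Target().Digest)
}
```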
Feb 13 19:01:22.450409 containerd[1753]: time="2025-02-13T19:01:22.450365615Z" level=info msg="StartContainer for \"fcc53cbc089cacd87cb2aaa33d95d102e17b3aef45d6e8859eb83961fca68255\" returns successfully" Feb 13 19:01:22.804188 kubelet[2567]: E0213 19:01:22.804131 2567 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:22.855635 kubelet[2567]: E0213 19:01:22.855593 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:23.185317 kubelet[2567]: I0213 19:01:23.185165 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=31.691472813 podStartE2EDuration="32.18514488s" podCreationTimestamp="2025-02-13 19:00:51 +0000 UTC" firstStartedPulling="2025-02-13 19:01:21.853981171 +0000 UTC m=+80.793173506" lastFinishedPulling="2025-02-13 19:01:22.347653198 +0000 UTC m=+81.286845573" observedRunningTime="2025-02-13 19:01:23.184776881 +0000 UTC m=+82.123969296" watchObservedRunningTime="2025-02-13 19:01:23.18514488 +0000 UTC m=+82.124337255" Feb 13 19:01:23.197616 systemd-networkd[1503]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 19:01:23.855926 kubelet[2567]: E0213 19:01:23.855869 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:24.856736 kubelet[2567]: E0213 19:01:24.856689 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:25.857178 kubelet[2567]: E0213 19:01:25.857113 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:26.858303 kubelet[2567]: E0213 19:01:26.858244 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:27.858859 kubelet[2567]: E0213 19:01:27.858799 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:28.859799 kubelet[2567]: E0213 19:01:28.859748 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:29.860428 kubelet[2567]: E0213 19:01:29.860365 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:01:30.860707 kubelet[2567]: E0213 19:01:30.860653 2567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
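Editor's note: two things stand out in the tail of this window. First, the startup-latency entry for default/test-pod-1 reports podStartE2EDuration of about 32.19 s (pod created at 19:00:51, observed running at 19:01:23.18) and podStartSLOduration of about 31.69 s, which is the same interval minus the ~0.49 s spent pulling the image (firstStartedPulling to lastFinishedPulling, matching the pull logged above). Second, the kubelet logs "Unable to read config path ... /etc/kubernetes/manifests" once a second because its static-pod path is configured but the directory does not exist; on a node that runs no static pods this is harmless noise. One way to quiet it, assuming the default staticPodPath shown in the error applies on this node, is simply to create the directory, as in the sketch below; adjusting staticPodPath in the kubelet configuration is the alternative.

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical remediation: ensure the kubelet's static-pod path exists
	// so the once-per-second "Unable to read config path" error stops.
	// /etc/kubernetes/manifests is assumed here; confirm staticPodPath in the
	// node's kubelet configuration before relying on it.
	const staticPodPath = "/etc/kubernetes/manifests"

	if err := os.MkdirAll(staticPodPath, 0o755); err != nil {
		log.Fatalf("creating %s: %v", staticPodPath, err)
	}
	fmt.Println("created (or already present):", staticPodPath)
}
```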