May 9 23:57:38.437251 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 9 23:57:38.437274 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 9 22:39:45 -00 2025
May 9 23:57:38.437283 kernel: KASLR enabled
May 9 23:57:38.437288 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
May 9 23:57:38.437296 kernel: printk: bootconsole [pl11] enabled
May 9 23:57:38.437301 kernel: efi: EFI v2.7 by EDK II
May 9 23:57:38.437308 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
May 9 23:57:38.437314 kernel: random: crng init done
May 9 23:57:38.437320 kernel: ACPI: Early table checksum verification disabled
May 9 23:57:38.437326 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
May 9 23:57:38.437332 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:57:38.437338 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:57:38.437346 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 9 23:57:38.437352 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:57:38.437359 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:57:38.437365 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:57:38.437373 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:57:38.437380 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:57:38.437387 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:57:38.437393 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
May 9 23:57:38.437400 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:57:38.437406 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
May 9 23:57:38.437436 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
May 9 23:57:38.437455 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
May 9 23:57:38.437461 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
May 9 23:57:38.437468 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
May 9 23:57:38.437474 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
May 9 23:57:38.442992 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
May 9 23:57:38.443021 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
May 9 23:57:38.443028 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
May 9 23:57:38.443034 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
May 9 23:57:38.443041 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
May 9 23:57:38.443047 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
May 9 23:57:38.443054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
May 9 23:57:38.443060 kernel: NUMA: NODE_DATA [mem 0x1bf7f1800-0x1bf7f6fff]
May 9 23:57:38.443066 kernel: Zone ranges:
May 9 23:57:38.443073 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
May 9 23:57:38.443079 kernel: DMA32 empty
May 9 23:57:38.443085 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
May 9 23:57:38.443092 kernel: Movable zone start for each node
May 9 23:57:38.443103 kernel: Early memory node ranges
May 9 23:57:38.443110 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
May 9 23:57:38.443117 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
May 9 23:57:38.443123 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
May 9 23:57:38.443130 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
May 9 23:57:38.443138 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
May 9 23:57:38.443145 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
May 9 23:57:38.443152 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
May 9 23:57:38.443160 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
May 9 23:57:38.443167 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
May 9 23:57:38.443173 kernel: psci: probing for conduit method from ACPI.
May 9 23:57:38.443180 kernel: psci: PSCIv1.1 detected in firmware.
May 9 23:57:38.443187 kernel: psci: Using standard PSCI v0.2 function IDs
May 9 23:57:38.443194 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 9 23:57:38.443201 kernel: psci: SMC Calling Convention v1.4
May 9 23:57:38.443208 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 9 23:57:38.443214 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
May 9 23:57:38.443223 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 9 23:57:38.443230 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 9 23:57:38.443237 kernel: pcpu-alloc: [0] 0 [0] 1
May 9 23:57:38.443243 kernel: Detected PIPT I-cache on CPU0
May 9 23:57:38.443250 kernel: CPU features: detected: GIC system register CPU interface
May 9 23:57:38.443257 kernel: CPU features: detected: Hardware dirty bit management
May 9 23:57:38.443264 kernel: CPU features: detected: Spectre-BHB
May 9 23:57:38.443271 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 9 23:57:38.443278 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 9 23:57:38.443284 kernel: CPU features: detected: ARM erratum 1418040
May 9 23:57:38.443291 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
May 9 23:57:38.443299 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 9 23:57:38.443306 kernel: alternatives: applying boot alternatives
May 9 23:57:38.443314 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 9 23:57:38.443322 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 23:57:38.443329 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 23:57:38.443336 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 23:57:38.443343 kernel: Fallback order for Node 0: 0
May 9 23:57:38.443350 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
May 9 23:57:38.443356 kernel: Policy zone: Normal
May 9 23:57:38.443363 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 23:57:38.443370 kernel: software IO TLB: area num 2.
May 9 23:57:38.443378 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
May 9 23:57:38.443385 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
May 9 23:57:38.443392 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 9 23:57:38.443399 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 23:57:38.443407 kernel: rcu: RCU event tracing is enabled.
May 9 23:57:38.443413 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 9 23:57:38.443421 kernel: Trampoline variant of Tasks RCU enabled.
May 9 23:57:38.443428 kernel: Tracing variant of Tasks RCU enabled.
May 9 23:57:38.443435 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 23:57:38.443441 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 9 23:57:38.443448 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 9 23:57:38.443457 kernel: GICv3: 960 SPIs implemented
May 9 23:57:38.443463 kernel: GICv3: 0 Extended SPIs implemented
May 9 23:57:38.443470 kernel: Root IRQ handler: gic_handle_irq
May 9 23:57:38.443477 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 9 23:57:38.443493 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
May 9 23:57:38.443500 kernel: ITS: No ITS available, not enabling LPIs
May 9 23:57:38.443507 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 23:57:38.443514 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:57:38.443521 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 9 23:57:38.443528 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 9 23:57:38.443536 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 9 23:57:38.443544 kernel: Console: colour dummy device 80x25
May 9 23:57:38.443552 kernel: printk: console [tty1] enabled
May 9 23:57:38.443559 kernel: ACPI: Core revision 20230628
May 9 23:57:38.443566 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 9 23:57:38.443574 kernel: pid_max: default: 32768 minimum: 301
May 9 23:57:38.443581 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 23:57:38.443588 kernel: landlock: Up and running.
May 9 23:57:38.443595 kernel: SELinux: Initializing.
May 9 23:57:38.443602 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:57:38.443609 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:57:38.443618 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 23:57:38.443625 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 23:57:38.443633 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
May 9 23:57:38.443639 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
May 9 23:57:38.443647 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 9 23:57:38.443653 kernel: rcu: Hierarchical SRCU implementation.
May 9 23:57:38.443661 kernel: rcu: Max phase no-delay instances is 400.
May 9 23:57:38.443675 kernel: Remapping and enabling EFI services.
May 9 23:57:38.443682 kernel: smp: Bringing up secondary CPUs ...
May 9 23:57:38.443689 kernel: Detected PIPT I-cache on CPU1
May 9 23:57:38.443696 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
May 9 23:57:38.443705 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:57:38.443713 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 9 23:57:38.443720 kernel: smp: Brought up 1 node, 2 CPUs
May 9 23:57:38.443727 kernel: SMP: Total of 2 processors activated.
May 9 23:57:38.443735 kernel: CPU features: detected: 32-bit EL0 Support
May 9 23:57:38.443744 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
May 9 23:57:38.443751 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 9 23:57:38.443759 kernel: CPU features: detected: CRC32 instructions
May 9 23:57:38.443766 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 9 23:57:38.443774 kernel: CPU features: detected: LSE atomic instructions
May 9 23:57:38.443781 kernel: CPU features: detected: Privileged Access Never
May 9 23:57:38.443788 kernel: CPU: All CPU(s) started at EL1
May 9 23:57:38.443796 kernel: alternatives: applying system-wide alternatives
May 9 23:57:38.443803 kernel: devtmpfs: initialized
May 9 23:57:38.443812 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 23:57:38.443820 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 9 23:57:38.443827 kernel: pinctrl core: initialized pinctrl subsystem
May 9 23:57:38.443834 kernel: SMBIOS 3.1.0 present.
May 9 23:57:38.443842 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
May 9 23:57:38.443849 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 23:57:38.443857 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 23:57:38.443864 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 23:57:38.443872 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 23:57:38.443881 kernel: audit: initializing netlink subsys (disabled)
May 9 23:57:38.443888 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
May 9 23:57:38.443895 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 23:57:38.443903 kernel: cpuidle: using governor menu
May 9 23:57:38.443910 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 23:57:38.443918 kernel: ASID allocator initialised with 32768 entries
May 9 23:57:38.443925 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 23:57:38.443932 kernel: Serial: AMBA PL011 UART driver
May 9 23:57:38.443940 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 9 23:57:38.443948 kernel: Modules: 0 pages in range for non-PLT usage
May 9 23:57:38.443956 kernel: Modules: 509008 pages in range for PLT usage
May 9 23:57:38.443964 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 23:57:38.443971 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 23:57:38.443979 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 23:57:38.443986 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 23:57:38.443993 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 23:57:38.444001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 23:57:38.444008 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 23:57:38.444017 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 23:57:38.444024 kernel: ACPI: Added _OSI(Module Device)
May 9 23:57:38.444032 kernel: ACPI: Added _OSI(Processor Device)
May 9 23:57:38.444039 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 23:57:38.444046 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 23:57:38.444054 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 23:57:38.444061 kernel: ACPI: Interpreter enabled
May 9 23:57:38.444068 kernel: ACPI: Using GIC for interrupt routing
May 9 23:57:38.444076 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
May 9 23:57:38.444084 kernel: printk: console [ttyAMA0] enabled
May 9 23:57:38.444092 kernel: printk: bootconsole [pl11] disabled
May 9 23:57:38.444099 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
May 9 23:57:38.444107 kernel: iommu: Default domain type: Translated
May 9 23:57:38.444114 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 23:57:38.444122 kernel: efivars: Registered efivars operations
May 9 23:57:38.444129 kernel: vgaarb: loaded
May 9 23:57:38.444136 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 23:57:38.444144 kernel: VFS: Disk quotas dquot_6.6.0
May 9 23:57:38.444153 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 23:57:38.444160 kernel: pnp: PnP ACPI init
May 9 23:57:38.444167 kernel: pnp: PnP ACPI: found 0 devices
May 9 23:57:38.444174 kernel: NET: Registered PF_INET protocol family
May 9 23:57:38.444182 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 23:57:38.444189 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 23:57:38.444197 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 23:57:38.444204 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 23:57:38.444211 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 23:57:38.444220 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 23:57:38.444228 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:57:38.444235 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:57:38.444243 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 23:57:38.444250 kernel: PCI: CLS 0 bytes, default 64
May 9 23:57:38.444258 kernel: kvm [1]: HYP mode not available
May 9 23:57:38.444265 kernel: Initialise system trusted keyrings
May 9 23:57:38.444272 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 23:57:38.444280 kernel: Key type asymmetric registered
May 9 23:57:38.444288 kernel: Asymmetric key parser 'x509' registered
May 9 23:57:38.444296 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 9 23:57:38.444303 kernel: io scheduler mq-deadline registered
May 9 23:57:38.444310 kernel: io scheduler kyber registered
May 9 23:57:38.444318 kernel: io scheduler bfq registered
May 9 23:57:38.444325 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 23:57:38.444332 kernel: thunder_xcv, ver 1.0
May 9 23:57:38.444340 kernel: thunder_bgx, ver 1.0
May 9 23:57:38.444347 kernel: nicpf, ver 1.0
May 9 23:57:38.444354 kernel: nicvf, ver 1.0
May 9 23:57:38.444574 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 23:57:38.444654 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:57:37 UTC (1746835057)
May 9 23:57:38.444665 kernel: efifb: probing for efifb
May 9 23:57:38.444672 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 9 23:57:38.444680 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 9 23:57:38.444687 kernel: efifb: scrolling: redraw
May 9 23:57:38.444695 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 9 23:57:38.444705 kernel: Console: switching to colour frame buffer device 128x48
May 9 23:57:38.444713 kernel: fb0: EFI VGA frame buffer device
May 9 23:57:38.444720 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
May 9 23:57:38.444727 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 23:57:38.444735 kernel: No ACPI PMU IRQ for CPU0
May 9 23:57:38.444742 kernel: No ACPI PMU IRQ for CPU1
May 9 23:57:38.444749 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
May 9 23:57:38.444757 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 23:57:38.444764 kernel: watchdog: Hard watchdog permanently disabled
May 9 23:57:38.444773 kernel: NET: Registered PF_INET6 protocol family
May 9 23:57:38.444780 kernel: Segment Routing with IPv6
May 9 23:57:38.444788 kernel: In-situ OAM (IOAM) with IPv6
May 9 23:57:38.444796 kernel: NET: Registered PF_PACKET protocol family
May 9 23:57:38.444803 kernel: Key type dns_resolver registered
May 9 23:57:38.444810 kernel: registered taskstats version 1
May 9 23:57:38.444818 kernel: Loading compiled-in X.509 certificates
May 9 23:57:38.444825 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02a1572fa4e3e92c40cffc658d8dbcab2e5537ff'
May 9 23:57:38.444832 kernel: Key type .fscrypt registered
May 9 23:57:38.444841 kernel: Key type fscrypt-provisioning registered
May 9 23:57:38.444848 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 23:57:38.444856 kernel: ima: Allocated hash algorithm: sha1
May 9 23:57:38.444863 kernel: ima: No architecture policies found
May 9 23:57:38.444871 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 23:57:38.444878 kernel: clk: Disabling unused clocks
May 9 23:57:38.444885 kernel: Freeing unused kernel memory: 39424K
May 9 23:57:38.444893 kernel: Run /init as init process
May 9 23:57:38.444900 kernel:   with arguments:
May 9 23:57:38.444908 kernel:     /init
May 9 23:57:38.444916 kernel:   with environment:
May 9 23:57:38.444923 kernel:     HOME=/
May 9 23:57:38.444930 kernel:     TERM=linux
May 9 23:57:38.444938 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 23:57:38.444948 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:57:38.444958 systemd[1]: Detected virtualization microsoft.
May 9 23:57:38.444965 systemd[1]: Detected architecture arm64.
May 9 23:57:38.444975 systemd[1]: Running in initrd.
May 9 23:57:38.444983 systemd[1]: No hostname configured, using default hostname.
May 9 23:57:38.444991 systemd[1]: Hostname set to .
May 9 23:57:38.444999 systemd[1]: Initializing machine ID from random generator.
May 9 23:57:38.445007 systemd[1]: Queued start job for default target initrd.target.
May 9 23:57:38.445015 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:57:38.445023 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:57:38.445032 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 23:57:38.445042 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:57:38.445050 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 23:57:38.445058 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 23:57:38.445068 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 23:57:38.445076 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 23:57:38.445084 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:57:38.445092 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:57:38.445101 systemd[1]: Reached target paths.target - Path Units.
May 9 23:57:38.445109 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:57:38.445117 systemd[1]: Reached target swap.target - Swaps.
May 9 23:57:38.445125 systemd[1]: Reached target timers.target - Timer Units.
May 9 23:57:38.445133 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:57:38.445141 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:57:38.445149 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 23:57:38.445157 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 23:57:38.445167 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:57:38.445175 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:57:38.445183 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:57:38.445191 systemd[1]: Reached target sockets.target - Socket Units.
May 9 23:57:38.445199 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 23:57:38.445207 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:57:38.445215 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 23:57:38.445223 systemd[1]: Starting systemd-fsck-usr.service...
May 9 23:57:38.445231 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:57:38.445240 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:57:38.445271 systemd-journald[217]: Collecting audit messages is disabled.
May 9 23:57:38.445291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:57:38.445300 systemd-journald[217]: Journal started
May 9 23:57:38.445321 systemd-journald[217]: Runtime Journal (/run/log/journal/b40da1703acd42218f649562d383d45c) is 8.0M, max 78.5M, 70.5M free.
May 9 23:57:38.446164 systemd-modules-load[218]: Inserted module 'overlay'
May 9 23:57:38.477505 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 23:57:38.489974 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:57:38.490041 kernel: Bridge firewalling registered
May 9 23:57:38.490109 systemd-modules-load[218]: Inserted module 'br_netfilter'
May 9 23:57:38.497924 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 23:57:38.506527 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:57:38.521073 systemd[1]: Finished systemd-fsck-usr.service.
May 9 23:57:38.533156 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:57:38.547465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:38.573816 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:57:38.584718 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:57:38.611722 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 23:57:38.634735 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:57:38.650702 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:57:38.661884 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:57:38.681508 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:57:38.695508 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:57:38.722042 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 23:57:38.736578 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:57:38.753123 dracut-cmdline[251]: dracut-dracut-053
May 9 23:57:38.759706 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 9 23:57:38.794726 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:57:38.811230 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:57:38.812851 systemd-resolved[254]: Positive Trust Anchors:
May 9 23:57:38.812862 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:57:38.812898 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:57:38.815198 systemd-resolved[254]: Defaulting to hostname 'linux'.
May 9 23:57:38.822343 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:57:38.835726 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:57:38.929498 kernel: SCSI subsystem initialized
May 9 23:57:38.937496 kernel: Loading iSCSI transport class v2.0-870.
May 9 23:57:38.950499 kernel: iscsi: registered transport (tcp)
May 9 23:57:38.965795 kernel: iscsi: registered transport (qla4xxx)
May 9 23:57:38.965880 kernel: QLogic iSCSI HBA Driver
May 9 23:57:39.001950 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 23:57:39.016825 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 23:57:39.054049 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 23:57:39.054107 kernel: device-mapper: uevent: version 1.0.3
May 9 23:57:39.061801 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 23:57:39.113509 kernel: raid6: neonx8 gen() 15741 MB/s
May 9 23:57:39.133516 kernel: raid6: neonx4 gen() 15669 MB/s
May 9 23:57:39.153494 kernel: raid6: neonx2 gen() 13378 MB/s
May 9 23:57:39.174498 kernel: raid6: neonx1 gen() 10488 MB/s
May 9 23:57:39.194492 kernel: raid6: int64x8 gen() 6958 MB/s
May 9 23:57:39.214506 kernel: raid6: int64x4 gen() 7340 MB/s
May 9 23:57:39.235499 kernel: raid6: int64x2 gen() 6131 MB/s
May 9 23:57:39.259395 kernel: raid6: int64x1 gen() 5058 MB/s
May 9 23:57:39.259419 kernel: raid6: using algorithm neonx8 gen() 15741 MB/s
May 9 23:57:39.284857 kernel: raid6: .... xor() 11923 MB/s, rmw enabled
May 9 23:57:39.284904 kernel: raid6: using neon recovery algorithm
May 9 23:57:39.294499 kernel: xor: measuring software checksum speed
May 9 23:57:39.302388 kernel: 8regs : 18114 MB/sec
May 9 23:57:39.302439 kernel: 32regs : 19655 MB/sec
May 9 23:57:39.306145 kernel: arm64_neon : 26927 MB/sec
May 9 23:57:39.310713 kernel: xor: using function: arm64_neon (26927 MB/sec)
May 9 23:57:39.363518 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 23:57:39.374955 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:57:39.394663 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:57:39.419933 systemd-udevd[437]: Using default interface naming scheme 'v255'.
May 9 23:57:39.426276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:57:39.445820 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 23:57:39.466036 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation
May 9 23:57:39.497884 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:57:39.520651 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:57:39.560747 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:57:39.581759 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 23:57:39.612554 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 23:57:39.625265 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:57:39.642816 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:57:39.651545 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:57:39.682712 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 23:57:39.705477 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:57:39.712165 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:57:39.729962 kernel: hv_vmbus: Vmbus version:5.3
May 9 23:57:39.732877 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:57:39.746371 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:57:39.746622 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:39.788390 kernel: hv_vmbus: registering driver hid_hyperv
May 9 23:57:39.788420 kernel: pps_core: LinuxPPS API ver. 1 registered
May 9 23:57:39.788430 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
May 9 23:57:39.760027 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:57:39.818136 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 9 23:57:39.818310 kernel: hv_vmbus: registering driver hyperv_keyboard
May 9 23:57:39.818322 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
May 9 23:57:39.818332 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
May 9 23:57:39.818826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:57:39.930922 kernel: hv_vmbus: registering driver hv_netvsc
May 9 23:57:39.930951 kernel: PTP clock support registered
May 9 23:57:39.930961 kernel: hv_utils: Registering HyperV Utility Driver
May 9 23:57:39.930970 kernel: hv_vmbus: registering driver hv_utils
May 9 23:57:39.931004 kernel: hv_vmbus: registering driver hv_storvsc
May 9 23:57:39.931014 kernel: hv_utils: Heartbeat IC version 3.0
May 9 23:57:39.941043 kernel: hv_utils: Shutdown IC version 3.2
May 9 23:57:39.941073 kernel: hv_utils: TimeSync IC version 4.0
May 9 23:57:39.941091 kernel: scsi host1: storvsc_host_t
May 9 23:57:39.925540 systemd-resolved[254]: Clock change detected. Flushing caches.
May 9 23:57:39.972744 kernel: scsi host0: storvsc_host_t
May 9 23:57:39.972941 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 9 23:57:39.973039 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
May 9 23:57:39.963770 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:57:39.984105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:40.013303 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 9 23:57:40.013546 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 9 23:57:40.011585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:57:40.034677 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 9 23:57:40.011821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:40.054812 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 9 23:57:40.055012 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 9 23:57:40.055112 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 9 23:57:40.055195 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 9 23:57:40.055276 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 9 23:57:40.020816 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:57:40.081623 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 9 23:57:40.088089 kernel: hv_netvsc 002248b9-88cb-0022-48b9-88cb002248b9 eth0: VF slot 1 added
May 9 23:57:40.070158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:57:40.109733 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 9 23:57:40.110455 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:40.131810 kernel: hv_vmbus: registering driver hv_pci
May 9 23:57:40.131871 kernel: hv_pci 0ff17a0a-f1f6-4a88-8492-e41bc4b09550: PCI VMBus probing: Using version 0x10004
May 9 23:57:40.132217 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:57:40.179291 kernel: hv_pci 0ff17a0a-f1f6-4a88-8492-e41bc4b09550: PCI host bridge to bus f1f6:00
May 9 23:57:40.179463 kernel: pci_bus f1f6:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
May 9 23:57:40.179565 kernel: pci_bus f1f6:00: No busn resource found for root bus, will use [bus 00-ff]
May 9 23:57:40.179656 kernel: pci f1f6:00:02.0: [15b3:1018] type 00 class 0x020000
May 9 23:57:40.402860 kernel: pci f1f6:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 9 23:57:40.411781 kernel: pci f1f6:00:02.0: enabling Extended Tags
May 9 23:57:40.437164 kernel: pci f1f6:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f1f6:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
May 9 23:57:40.437394 kernel: pci_bus f1f6:00: busn_res: [bus 00-ff] end is updated to 00
May 9 23:57:40.430135 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:57:40.458341 kernel: pci f1f6:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 9 23:57:40.497645 kernel: mlx5_core f1f6:00:02.0: enabling device (0000 -> 0002)
May 9 23:57:40.504657 kernel: mlx5_core f1f6:00:02.0: firmware version: 16.31.2424
May 9 23:57:40.652665 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
May 9 23:57:40.680476 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (488)
May 9 23:57:40.693846 kernel: BTRFS: device fsid 7278434d-1c51-4098-9ab9-92db46b8a354 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (487)
May 9 23:57:40.709058 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 9 23:57:40.728790 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
May 9 23:57:40.736949 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
May 9 23:57:40.766003 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 23:57:40.808125 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
May 9 23:57:40.929437 kernel: hv_netvsc 002248b9-88cb-0022-48b9-88cb002248b9 eth0: VF registering: eth1
May 9 23:57:40.929629 kernel: mlx5_core f1f6:00:02.0 eth1: joined to eth0
May 9 23:57:40.939715 kernel: mlx5_core f1f6:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
May 9 23:57:40.956672 kernel: mlx5_core f1f6:00:02.0 enP61942s1: renamed from eth1
May 9 23:57:41.800668 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 9 23:57:41.800903 disk-uuid[605]: The operation has completed successfully.
May 9 23:57:41.860790 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 23:57:41.860901 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 23:57:41.898801 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 23:57:41.913095 sh[723]: Success
May 9 23:57:41.938925 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 9 23:57:42.114186 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 23:57:42.121089 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 23:57:42.138817 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 23:57:42.180868 kernel: BTRFS info (device dm-0): first mount of filesystem 7278434d-1c51-4098-9ab9-92db46b8a354
May 9 23:57:42.180939 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 9 23:57:42.189409 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 23:57:42.195375 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 23:57:42.200432 kernel: BTRFS info (device dm-0): using free space tree
May 9 23:57:42.572223 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 23:57:42.578506 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 23:57:42.602924 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 23:57:42.615826 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 23:57:42.650127 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:57:42.650190 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:57:42.655218 kernel: BTRFS info (device sda6): using free space tree
May 9 23:57:42.676160 kernel: BTRFS info (device sda6): auto enabling async discard
May 9 23:57:42.684478 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 23:57:42.699663 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:57:42.707315 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 23:57:42.724194 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 23:57:42.763342 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:57:42.785814 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:57:42.815622 systemd-networkd[907]: lo: Link UP
May 9 23:57:42.815665 systemd-networkd[907]: lo: Gained carrier
May 9 23:57:42.817249 systemd-networkd[907]: Enumeration completed
May 9 23:57:42.817909 systemd-networkd[907]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:57:42.817912 systemd-networkd[907]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 23:57:42.820717 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 23:57:42.828525 systemd[1]: Reached target network.target - Network.
May 9 23:57:42.895650 kernel: mlx5_core f1f6:00:02.0 enP61942s1: Link up
May 9 23:57:43.045671 kernel: hv_netvsc 002248b9-88cb-0022-48b9-88cb002248b9 eth0: Data path switched to VF: enP61942s1
May 9 23:57:43.045737 systemd-networkd[907]: enP61942s1: Link UP
May 9 23:57:43.045817 systemd-networkd[907]: eth0: Link UP
May 9 23:57:43.045914 systemd-networkd[907]: eth0: Gained carrier
May 9 23:57:43.045923 systemd-networkd[907]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:57:43.058883 systemd-networkd[907]: enP61942s1: Gained carrier
May 9 23:57:43.079685 systemd-networkd[907]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 9 23:57:43.417414 ignition[868]: Ignition 2.19.0
May 9 23:57:43.417433 ignition[868]: Stage: fetch-offline
May 9 23:57:43.422746 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:57:43.417474 ignition[868]: no configs at "/usr/lib/ignition/base.d"
May 9 23:57:43.417482 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:57:43.417585 ignition[868]: parsed url from cmdline: ""
May 9 23:57:43.417588 ignition[868]: no config URL provided
May 9 23:57:43.417593 ignition[868]: reading system config file "/usr/lib/ignition/user.ign"
May 9 23:57:43.455924 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 9 23:57:43.417599 ignition[868]: no config at "/usr/lib/ignition/user.ign"
May 9 23:57:43.417605 ignition[868]: failed to fetch config: resource requires networking
May 9 23:57:43.418058 ignition[868]: Ignition finished successfully
May 9 23:57:43.484105 ignition[915]: Ignition 2.19.0
May 9 23:57:43.484113 ignition[915]: Stage: fetch
May 9 23:57:43.484328 ignition[915]: no configs at "/usr/lib/ignition/base.d"
May 9 23:57:43.484342 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:57:43.484471 ignition[915]: parsed url from cmdline: ""
May 9 23:57:43.484475 ignition[915]: no config URL provided
May 9 23:57:43.484480 ignition[915]: reading system config file "/usr/lib/ignition/user.ign"
May 9 23:57:43.484487 ignition[915]: no config at "/usr/lib/ignition/user.ign"
May 9 23:57:43.484514 ignition[915]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 9 23:57:43.589272 ignition[915]: GET result: OK
May 9 23:57:43.589366 ignition[915]: config has been read from IMDS userdata
May 9 23:57:43.589405 ignition[915]: parsing config with SHA512: ceee5fcc60505e39fe846a72f1e16969cf1d17feb7c844b9993cbc6f3b8682135fd2c3085a84daf2d441464213baf34a9d534cccea9a249c2a9f47e223026028
May 9 23:57:43.593560 unknown[915]: fetched base config from "system"
May 9 23:57:43.594030 ignition[915]: fetch: fetch complete
May 9 23:57:43.593568 unknown[915]: fetched base config from "system"
May 9 23:57:43.594037 ignition[915]: fetch: fetch passed
May 9 23:57:43.593581 unknown[915]: fetched user config from "azure"
May 9 23:57:43.594088 ignition[915]: Ignition finished successfully
May 9 23:57:43.596131 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 9 23:57:43.622828 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 23:57:43.647079 ignition[922]: Ignition 2.19.0
May 9 23:57:43.651054 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 23:57:43.647086 ignition[922]: Stage: kargs
May 9 23:57:43.670956 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 23:57:43.647304 ignition[922]: no configs at "/usr/lib/ignition/base.d"
May 9 23:57:43.647319 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:57:43.648456 ignition[922]: kargs: kargs passed
May 9 23:57:43.713615 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 23:57:43.648509 ignition[922]: Ignition finished successfully
May 9 23:57:43.726904 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 23:57:43.695840 ignition[928]: Ignition 2.19.0
May 9 23:57:43.738195 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 23:57:43.695851 ignition[928]: Stage: disks
May 9 23:57:43.752670 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:57:43.696064 ignition[928]: no configs at "/usr/lib/ignition/base.d"
May 9 23:57:43.764690 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:57:43.696073 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:57:43.778814 systemd[1]: Reached target basic.target - Basic System.
May 9 23:57:43.712117 ignition[928]: disks: disks passed
May 9 23:57:43.801916 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 23:57:43.712201 ignition[928]: Ignition finished successfully
May 9 23:57:43.906706 systemd-fsck[937]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
May 9 23:57:43.919785 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 23:57:43.941923 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 23:57:44.005658 kernel: EXT4-fs (sda9): mounted filesystem ffdb9517-5190-4050-8f70-de9d48dc1858 r/w with ordered data mode. Quota mode: none.
May 9 23:57:44.006094 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 23:57:44.016574 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 23:57:44.058728 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:57:44.079580 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 23:57:44.101647 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (948)
May 9 23:57:44.091613 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 9 23:57:44.110586 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 23:57:44.162037 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:57:44.162072 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:57:44.162082 kernel: BTRFS info (device sda6): using free space tree
May 9 23:57:44.110626 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:57:44.157241 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 23:57:44.189656 kernel: BTRFS info (device sda6): auto enabling async discard
May 9 23:57:44.189954 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 23:57:44.200016 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:57:44.223728 systemd-networkd[907]: eth0: Gained IPv6LL
May 9 23:57:44.287758 systemd-networkd[907]: enP61942s1: Gained IPv6LL
May 9 23:57:44.714626 coreos-metadata[950]: May 09 23:57:44.714 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 9 23:57:44.723421 coreos-metadata[950]: May 09 23:57:44.723 INFO Fetch successful
May 9 23:57:44.723421 coreos-metadata[950]: May 09 23:57:44.723 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 9 23:57:44.741372 coreos-metadata[950]: May 09 23:57:44.740 INFO Fetch successful
May 9 23:57:44.757806 coreos-metadata[950]: May 09 23:57:44.756 INFO wrote hostname ci-4081.3.3-n-84ab9604c4 to /sysroot/etc/hostname
May 9 23:57:44.765203 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 9 23:57:44.887140 initrd-setup-root[977]: cut: /sysroot/etc/passwd: No such file or directory
May 9 23:57:44.911948 initrd-setup-root[984]: cut: /sysroot/etc/group: No such file or directory
May 9 23:57:44.921513 initrd-setup-root[991]: cut: /sysroot/etc/shadow: No such file or directory
May 9 23:57:44.929863 initrd-setup-root[998]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 23:57:45.792865 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 23:57:45.807883 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 23:57:45.821960 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 23:57:45.837537 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:57:45.844300 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 23:57:45.865674 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 23:57:45.881662 ignition[1067]: INFO : Ignition 2.19.0
May 9 23:57:45.881662 ignition[1067]: INFO : Stage: mount
May 9 23:57:45.881662 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:57:45.881662 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:57:45.912895 ignition[1067]: INFO : mount: mount passed
May 9 23:57:45.912895 ignition[1067]: INFO : Ignition finished successfully
May 9 23:57:45.886838 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 23:57:45.917896 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 23:57:45.937007 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:57:45.976341 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1077)
May 9 23:57:45.976396 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:57:45.983734 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:57:45.989058 kernel: BTRFS info (device sda6): using free space tree
May 9 23:57:45.996665 kernel: BTRFS info (device sda6): auto enabling async discard
May 9 23:57:45.998934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:57:46.026904 ignition[1095]: INFO : Ignition 2.19.0
May 9 23:57:46.026904 ignition[1095]: INFO : Stage: files
May 9 23:57:46.037105 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:57:46.037105 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:57:46.037105 ignition[1095]: DEBUG : files: compiled without relabeling support, skipping
May 9 23:57:46.057115 ignition[1095]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 23:57:46.057115 ignition[1095]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 23:57:46.090575 ignition[1095]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 23:57:46.100070 ignition[1095]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 23:57:46.100070 ignition[1095]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 23:57:46.091042 unknown[1095]: wrote ssh authorized keys file for user: core
May 9 23:57:46.157575 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 9 23:57:46.169272 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 9 23:57:46.272332 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 23:57:46.746859 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 9 23:57:47.198615 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 9 23:57:47.425293 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:57:47.425293 ignition[1095]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 9 23:57:47.453057 ignition[1095]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:57:47.467059 ignition[1095]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:57:47.467059 ignition[1095]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 9 23:57:47.467059 ignition[1095]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 9 23:57:47.467059 ignition[1095]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 9 23:57:47.467059 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:57:47.467059 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:57:47.467059 ignition[1095]: INFO : files: files passed
May 9 23:57:47.467059 ignition[1095]: INFO : Ignition finished successfully
May 9 23:57:47.467927 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 23:57:47.526953 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 23:57:47.547893 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 23:57:47.564205 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 23:57:47.564316 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 23:57:47.608406 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:57:47.608406 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:57:47.628770 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:57:47.629247 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:57:47.645138 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 23:57:47.671940 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 23:57:47.714690 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 23:57:47.714822 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 23:57:47.729243 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 23:57:47.744194 systemd[1]: Reached target initrd.target - Initrd Default Target. May 9 23:57:47.757393 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 9 23:57:47.776207 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 9 23:57:47.800533 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 23:57:47.820999 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 9 23:57:47.843394 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 23:57:47.843535 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 23:57:47.857738 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 9 23:57:47.873430 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:57:47.888676 systemd[1]: Stopped target timers.target - Timer Units. May 9 23:57:47.902260 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 9 23:57:47.902339 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 23:57:47.921317 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 23:57:47.935600 systemd[1]: Stopped target basic.target - Basic System. May 9 23:57:47.947940 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 23:57:47.960717 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 23:57:47.975266 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 23:57:47.990041 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 23:57:48.003760 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 23:57:48.018377 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 23:57:48.033331 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 23:57:48.046830 systemd[1]: Stopped target swap.target - Swaps. May 9 23:57:48.058752 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 23:57:48.058840 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 23:57:48.077974 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 23:57:48.092081 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:57:48.107166 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 23:57:48.114896 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:57:48.123106 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 23:57:48.123187 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 23:57:48.145379 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 23:57:48.145445 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 23:57:48.153957 systemd[1]: ignition-files.service: Deactivated successfully. May 9 23:57:48.154016 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 23:57:48.167139 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 9 23:57:48.167192 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
May 9 23:57:48.244509 ignition[1148]: INFO : Ignition 2.19.0 May 9 23:57:48.244509 ignition[1148]: INFO : Stage: umount May 9 23:57:48.244509 ignition[1148]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:57:48.244509 ignition[1148]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 9 23:57:48.244509 ignition[1148]: INFO : umount: umount passed May 9 23:57:48.244509 ignition[1148]: INFO : Ignition finished successfully May 9 23:57:48.203863 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 23:57:48.224574 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 23:57:48.224684 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:57:48.236841 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 23:57:48.255163 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 23:57:48.255243 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:57:48.267731 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 23:57:48.267790 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 23:57:48.276229 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 23:57:48.276343 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 23:57:48.298939 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 23:57:48.299030 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 23:57:48.318291 systemd[1]: ignition-kargs.service: Deactivated successfully. May 9 23:57:48.318372 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 23:57:48.331696 systemd[1]: ignition-fetch.service: Deactivated successfully. May 9 23:57:48.331757 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 9 23:57:48.341814 systemd[1]: Stopped target network.target - Network. May 9 23:57:48.351382 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 23:57:48.351470 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 23:57:48.367703 systemd[1]: Stopped target paths.target - Path Units. May 9 23:57:48.379414 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 23:57:48.391702 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:57:48.401486 systemd[1]: Stopped target slices.target - Slice Units. May 9 23:57:48.413717 systemd[1]: Stopped target sockets.target - Socket Units. May 9 23:57:48.428324 systemd[1]: iscsid.socket: Deactivated successfully. May 9 23:57:48.428391 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 23:57:48.443118 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 23:57:48.443186 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 23:57:48.456863 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 23:57:48.456922 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 23:57:48.470251 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 23:57:48.470304 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 23:57:48.484334 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 23:57:48.496799 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
May 9 23:57:48.509160 systemd-networkd[907]: eth0: DHCPv6 lease lost May 9 23:57:48.511854 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 23:57:48.512482 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 23:57:48.512592 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 23:57:48.802756 kernel: hv_netvsc 002248b9-88cb-0022-48b9-88cb002248b9 eth0: Data path switched from VF: enP61942s1 May 9 23:57:48.526618 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 23:57:48.526819 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 9 23:57:48.543039 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 23:57:48.543111 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 23:57:48.578866 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 23:57:48.585106 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 23:57:48.585180 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 23:57:48.594136 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 23:57:48.594196 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 23:57:48.608423 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 23:57:48.608483 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 23:57:48.625732 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 23:57:48.625802 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:57:48.640570 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:57:48.688182 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 23:57:48.688457 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:57:48.703525 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 23:57:48.703584 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 23:57:48.717597 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 23:57:48.717645 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:57:48.731090 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 23:57:48.731150 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 23:57:48.752756 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 23:57:48.752818 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 23:57:48.765167 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 23:57:48.765226 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:57:48.817864 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 23:57:48.833415 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 23:57:48.833515 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:57:48.856301 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:57:48.856372 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:57:48.871600 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
May 9 23:57:48.871731 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 23:57:48.955861 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 23:57:48.956032 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 23:57:48.997310 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 23:57:48.997780 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 23:57:49.009717 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 23:57:49.022421 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 23:57:49.022508 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 23:57:49.051953 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 23:57:49.175672 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
May 9 23:57:49.078762 systemd[1]: Switching root.
May 9 23:57:49.179947 systemd-journald[217]: Journal stopped
0 [mem 0x1000000000-0xffffffffff] May 9 23:57:38.437474 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] May 9 23:57:38.442992 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] May 9 23:57:38.443021 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] May 9 23:57:38.443028 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] May 9 23:57:38.443034 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] May 9 23:57:38.443041 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] May 9 23:57:38.443047 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] May 9 23:57:38.443054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] May 9 23:57:38.443060 kernel: NUMA: NODE_DATA [mem 0x1bf7f1800-0x1bf7f6fff] May 9 23:57:38.443066 kernel: Zone ranges: May 9 23:57:38.443073 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] May 9 23:57:38.443079 kernel: DMA32 empty May 9 23:57:38.443085 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] May 9 23:57:38.443092 kernel: Movable zone start for each node May 9 23:57:38.443103 kernel: Early memory node ranges May 9 23:57:38.443110 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] May 9 23:57:38.443117 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] May 9 23:57:38.443123 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] May 9 23:57:38.443130 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] May 9 23:57:38.443138 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] May 9 23:57:38.443145 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] May 9 23:57:38.443152 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] May 9 23:57:38.443160 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] May 9 23:57:38.443167 kernel: On node 0, zone DMA: 36 pages in unavailable ranges May 9 23:57:38.443173 kernel: psci: probing for conduit method from ACPI. May 9 23:57:38.443180 kernel: psci: PSCIv1.1 detected in firmware. May 9 23:57:38.443187 kernel: psci: Using standard PSCI v0.2 function IDs May 9 23:57:38.443194 kernel: psci: MIGRATE_INFO_TYPE not supported. 
May 9 23:57:38.443201 kernel: psci: SMC Calling Convention v1.4 May 9 23:57:38.443208 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 9 23:57:38.443214 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 May 9 23:57:38.443223 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 9 23:57:38.443230 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 9 23:57:38.443237 kernel: pcpu-alloc: [0] 0 [0] 1 May 9 23:57:38.443243 kernel: Detected PIPT I-cache on CPU0 May 9 23:57:38.443250 kernel: CPU features: detected: GIC system register CPU interface May 9 23:57:38.443257 kernel: CPU features: detected: Hardware dirty bit management May 9 23:57:38.443264 kernel: CPU features: detected: Spectre-BHB May 9 23:57:38.443271 kernel: CPU features: kernel page table isolation forced ON by KASLR May 9 23:57:38.443278 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 9 23:57:38.443284 kernel: CPU features: detected: ARM erratum 1418040 May 9 23:57:38.443291 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) May 9 23:57:38.443299 kernel: CPU features: detected: SSBS not fully self-synchronizing May 9 23:57:38.443306 kernel: alternatives: applying boot alternatives May 9 23:57:38.443314 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4 May 9 23:57:38.443322 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 9 23:57:38.443329 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 9 23:57:38.443336 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 9 23:57:38.443343 kernel: Fallback order for Node 0: 0 May 9 23:57:38.443350 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 May 9 23:57:38.443356 kernel: Policy zone: Normal May 9 23:57:38.443363 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 9 23:57:38.443370 kernel: software IO TLB: area num 2. May 9 23:57:38.443378 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) May 9 23:57:38.443385 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) May 9 23:57:38.443392 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 9 23:57:38.443399 kernel: rcu: Preemptible hierarchical RCU implementation. May 9 23:57:38.443407 kernel: rcu: RCU event tracing is enabled. May 9 23:57:38.443413 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 9 23:57:38.443421 kernel: Trampoline variant of Tasks RCU enabled. May 9 23:57:38.443428 kernel: Tracing variant of Tasks RCU enabled. May 9 23:57:38.443435 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 9 23:57:38.443441 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 9 23:57:38.443448 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 9 23:57:38.443457 kernel: GICv3: 960 SPIs implemented May 9 23:57:38.443463 kernel: GICv3: 0 Extended SPIs implemented May 9 23:57:38.443470 kernel: Root IRQ handler: gic_handle_irq May 9 23:57:38.443477 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 9 23:57:38.443493 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 May 9 23:57:38.443500 kernel: ITS: No ITS available, not enabling LPIs May 9 23:57:38.443507 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 9 23:57:38.443514 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:57:38.443521 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 9 23:57:38.443528 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 9 23:57:38.443536 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 9 23:57:38.443544 kernel: Console: colour dummy device 80x25 May 9 23:57:38.443552 kernel: printk: console [tty1] enabled May 9 23:57:38.443559 kernel: ACPI: Core revision 20230628 May 9 23:57:38.443566 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 9 23:57:38.443574 kernel: pid_max: default: 32768 minimum: 301 May 9 23:57:38.443581 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 9 23:57:38.443588 kernel: landlock: Up and running. May 9 23:57:38.443595 kernel: SELinux: Initializing. May 9 23:57:38.443602 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 23:57:38.443609 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 23:57:38.443618 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 9 23:57:38.443625 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 9 23:57:38.443633 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 May 9 23:57:38.443639 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 May 9 23:57:38.443647 kernel: Hyper-V: enabling crash_kexec_post_notifiers May 9 23:57:38.443653 kernel: rcu: Hierarchical SRCU implementation. May 9 23:57:38.443661 kernel: rcu: Max phase no-delay instances is 400. May 9 23:57:38.443675 kernel: Remapping and enabling EFI services. May 9 23:57:38.443682 kernel: smp: Bringing up secondary CPUs ... May 9 23:57:38.443689 kernel: Detected PIPT I-cache on CPU1 May 9 23:57:38.443696 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 May 9 23:57:38.443705 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:57:38.443713 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 9 23:57:38.443720 kernel: smp: Brought up 1 node, 2 CPUs May 9 23:57:38.443727 kernel: SMP: Total of 2 processors activated. 
May 9 23:57:38.443735 kernel: CPU features: detected: 32-bit EL0 Support May 9 23:57:38.443744 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence May 9 23:57:38.443751 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 9 23:57:38.443759 kernel: CPU features: detected: CRC32 instructions May 9 23:57:38.443766 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 9 23:57:38.443774 kernel: CPU features: detected: LSE atomic instructions May 9 23:57:38.443781 kernel: CPU features: detected: Privileged Access Never May 9 23:57:38.443788 kernel: CPU: All CPU(s) started at EL1 May 9 23:57:38.443796 kernel: alternatives: applying system-wide alternatives May 9 23:57:38.443803 kernel: devtmpfs: initialized May 9 23:57:38.443812 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 9 23:57:38.443820 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 9 23:57:38.443827 kernel: pinctrl core: initialized pinctrl subsystem May 9 23:57:38.443834 kernel: SMBIOS 3.1.0 present. May 9 23:57:38.443842 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 May 9 23:57:38.443849 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 9 23:57:38.443857 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 9 23:57:38.443864 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 9 23:57:38.443872 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 9 23:57:38.443881 kernel: audit: initializing netlink subsys (disabled) May 9 23:57:38.443888 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 May 9 23:57:38.443895 kernel: thermal_sys: Registered thermal governor 'step_wise' May 9 23:57:38.443903 kernel: cpuidle: using governor menu May 9 23:57:38.443910 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 9 23:57:38.443918 kernel: ASID allocator initialised with 32768 entries May 9 23:57:38.443925 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 9 23:57:38.443932 kernel: Serial: AMBA PL011 UART driver May 9 23:57:38.443940 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 9 23:57:38.443948 kernel: Modules: 0 pages in range for non-PLT usage May 9 23:57:38.443956 kernel: Modules: 509008 pages in range for PLT usage May 9 23:57:38.443964 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 9 23:57:38.443971 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 9 23:57:38.443979 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 9 23:57:38.443986 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 9 23:57:38.443993 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 9 23:57:38.444001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 9 23:57:38.444008 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 9 23:57:38.444017 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 9 23:57:38.444024 kernel: ACPI: Added _OSI(Module Device) May 9 23:57:38.444032 kernel: ACPI: Added _OSI(Processor Device) May 9 23:57:38.444039 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 9 23:57:38.444046 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 9 23:57:38.444054 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 9 23:57:38.444061 kernel: ACPI: Interpreter enabled May 9 23:57:38.444068 kernel: ACPI: Using GIC for interrupt routing May 9 23:57:38.444076 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA May 9 23:57:38.444084 kernel: printk: console [ttyAMA0] enabled May 9 23:57:38.444092 kernel: printk: bootconsole [pl11] disabled May 9 23:57:38.444099 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA May 9 23:57:38.444107 kernel: iommu: Default domain type: Translated May 9 23:57:38.444114 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 9 23:57:38.444122 kernel: efivars: Registered efivars operations May 9 23:57:38.444129 kernel: vgaarb: loaded May 9 23:57:38.444136 kernel: clocksource: Switched to clocksource arch_sys_counter May 9 23:57:38.444144 kernel: VFS: Disk quotas dquot_6.6.0 May 9 23:57:38.444153 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 9 23:57:38.444160 kernel: pnp: PnP ACPI init May 9 23:57:38.444167 kernel: pnp: PnP ACPI: found 0 devices May 9 23:57:38.444174 kernel: NET: Registered PF_INET protocol family May 9 23:57:38.444182 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 9 23:57:38.444189 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 9 23:57:38.444197 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 9 23:57:38.444204 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 9 23:57:38.444211 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 9 23:57:38.444220 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 9 23:57:38.444228 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 23:57:38.444235 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 23:57:38.444243 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol 
family May 9 23:57:38.444250 kernel: PCI: CLS 0 bytes, default 64 May 9 23:57:38.444258 kernel: kvm [1]: HYP mode not available May 9 23:57:38.444265 kernel: Initialise system trusted keyrings May 9 23:57:38.444272 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 9 23:57:38.444280 kernel: Key type asymmetric registered May 9 23:57:38.444288 kernel: Asymmetric key parser 'x509' registered May 9 23:57:38.444296 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 9 23:57:38.444303 kernel: io scheduler mq-deadline registered May 9 23:57:38.444310 kernel: io scheduler kyber registered May 9 23:57:38.444318 kernel: io scheduler bfq registered May 9 23:57:38.444325 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 9 23:57:38.444332 kernel: thunder_xcv, ver 1.0 May 9 23:57:38.444340 kernel: thunder_bgx, ver 1.0 May 9 23:57:38.444347 kernel: nicpf, ver 1.0 May 9 23:57:38.444354 kernel: nicvf, ver 1.0 May 9 23:57:38.444574 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 9 23:57:38.444654 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:57:37 UTC (1746835057) May 9 23:57:38.444665 kernel: efifb: probing for efifb May 9 23:57:38.444672 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 9 23:57:38.444680 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 9 23:57:38.444687 kernel: efifb: scrolling: redraw May 9 23:57:38.444695 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 9 23:57:38.444705 kernel: Console: switching to colour frame buffer device 128x48 May 9 23:57:38.444713 kernel: fb0: EFI VGA frame buffer device May 9 23:57:38.444720 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... May 9 23:57:38.444727 kernel: hid: raw HID events driver (C) Jiri Kosina May 9 23:57:38.444735 kernel: No ACPI PMU IRQ for CPU0 May 9 23:57:38.444742 kernel: No ACPI PMU IRQ for CPU1 May 9 23:57:38.444749 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available May 9 23:57:38.444757 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 9 23:57:38.444764 kernel: watchdog: Hard watchdog permanently disabled May 9 23:57:38.444773 kernel: NET: Registered PF_INET6 protocol family May 9 23:57:38.444780 kernel: Segment Routing with IPv6 May 9 23:57:38.444788 kernel: In-situ OAM (IOAM) with IPv6 May 9 23:57:38.444796 kernel: NET: Registered PF_PACKET protocol family May 9 23:57:38.444803 kernel: Key type dns_resolver registered May 9 23:57:38.444810 kernel: registered taskstats version 1 May 9 23:57:38.444818 kernel: Loading compiled-in X.509 certificates May 9 23:57:38.444825 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02a1572fa4e3e92c40cffc658d8dbcab2e5537ff' May 9 23:57:38.444832 kernel: Key type .fscrypt registered May 9 23:57:38.444841 kernel: Key type fscrypt-provisioning registered May 9 23:57:38.444848 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 9 23:57:38.444856 kernel: ima: Allocated hash algorithm: sha1 May 9 23:57:38.444863 kernel: ima: No architecture policies found May 9 23:57:38.444871 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 9 23:57:38.444878 kernel: clk: Disabling unused clocks May 9 23:57:38.444885 kernel: Freeing unused kernel memory: 39424K May 9 23:57:38.444893 kernel: Run /init as init process May 9 23:57:38.444900 kernel: with arguments: May 9 23:57:38.444908 kernel: /init May 9 23:57:38.444916 kernel: with environment: May 9 23:57:38.444923 kernel: HOME=/ May 9 23:57:38.444930 kernel: TERM=linux May 9 23:57:38.444938 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 9 23:57:38.444948 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 23:57:38.444958 systemd[1]: Detected virtualization microsoft. May 9 23:57:38.444965 systemd[1]: Detected architecture arm64. May 9 23:57:38.444975 systemd[1]: Running in initrd. May 9 23:57:38.444983 systemd[1]: No hostname configured, using default hostname. May 9 23:57:38.444991 systemd[1]: Hostname set to . May 9 23:57:38.444999 systemd[1]: Initializing machine ID from random generator. May 9 23:57:38.445007 systemd[1]: Queued start job for default target initrd.target. May 9 23:57:38.445015 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:57:38.445023 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:57:38.445032 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 9 23:57:38.445042 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 23:57:38.445050 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 9 23:57:38.445058 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 9 23:57:38.445068 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 9 23:57:38.445076 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 9 23:57:38.445084 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:57:38.445092 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 23:57:38.445101 systemd[1]: Reached target paths.target - Path Units. May 9 23:57:38.445109 systemd[1]: Reached target slices.target - Slice Units. May 9 23:57:38.445117 systemd[1]: Reached target swap.target - Swaps. May 9 23:57:38.445125 systemd[1]: Reached target timers.target - Timer Units. May 9 23:57:38.445133 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 9 23:57:38.445141 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 23:57:38.445149 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 9 23:57:38.445157 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 9 23:57:38.445167 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 9 23:57:38.445175 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 23:57:38.445183 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:57:38.445191 systemd[1]: Reached target sockets.target - Socket Units. May 9 23:57:38.445199 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 9 23:57:38.445207 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 23:57:38.445215 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 9 23:57:38.445223 systemd[1]: Starting systemd-fsck-usr.service... May 9 23:57:38.445231 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 23:57:38.445240 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 23:57:38.445271 systemd-journald[217]: Collecting audit messages is disabled. May 9 23:57:38.445291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:57:38.445300 systemd-journald[217]: Journal started May 9 23:57:38.445321 systemd-journald[217]: Runtime Journal (/run/log/journal/b40da1703acd42218f649562d383d45c) is 8.0M, max 78.5M, 70.5M free. May 9 23:57:38.446164 systemd-modules-load[218]: Inserted module 'overlay' May 9 23:57:38.477505 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 9 23:57:38.489974 systemd[1]: Started systemd-journald.service - Journal Service. May 9 23:57:38.490041 kernel: Bridge firewalling registered May 9 23:57:38.490109 systemd-modules-load[218]: Inserted module 'br_netfilter' May 9 23:57:38.497924 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 9 23:57:38.506527 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:57:38.521073 systemd[1]: Finished systemd-fsck-usr.service. May 9 23:57:38.533156 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 23:57:38.547465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:57:38.573816 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:57:38.584718 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:57:38.611722 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 23:57:38.634735 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 23:57:38.650702 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:57:38.661884 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:57:38.681508 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 23:57:38.695508 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:57:38.722042 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 9 23:57:38.736578 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 9 23:57:38.753123 dracut-cmdline[251]: dracut-dracut-053 May 9 23:57:38.759706 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4 May 9 23:57:38.794726 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 23:57:38.811230 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:57:38.812851 systemd-resolved[254]: Positive Trust Anchors: May 9 23:57:38.812862 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 23:57:38.812898 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 23:57:38.815198 systemd-resolved[254]: Defaulting to hostname 'linux'. May 9 23:57:38.822343 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 23:57:38.835726 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 23:57:38.929498 kernel: SCSI subsystem initialized May 9 23:57:38.937496 kernel: Loading iSCSI transport class v2.0-870. May 9 23:57:38.950499 kernel: iscsi: registered transport (tcp) May 9 23:57:38.965795 kernel: iscsi: registered transport (qla4xxx) May 9 23:57:38.965880 kernel: QLogic iSCSI HBA Driver May 9 23:57:39.001950 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 9 23:57:39.016825 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 9 23:57:39.054049 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 9 23:57:39.054107 kernel: device-mapper: uevent: version 1.0.3 May 9 23:57:39.061801 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 9 23:57:39.113509 kernel: raid6: neonx8 gen() 15741 MB/s May 9 23:57:39.133516 kernel: raid6: neonx4 gen() 15669 MB/s May 9 23:57:39.153494 kernel: raid6: neonx2 gen() 13378 MB/s May 9 23:57:39.174498 kernel: raid6: neonx1 gen() 10488 MB/s May 9 23:57:39.194492 kernel: raid6: int64x8 gen() 6958 MB/s May 9 23:57:39.214506 kernel: raid6: int64x4 gen() 7340 MB/s May 9 23:57:39.235499 kernel: raid6: int64x2 gen() 6131 MB/s May 9 23:57:39.259395 kernel: raid6: int64x1 gen() 5058 MB/s May 9 23:57:39.259419 kernel: raid6: using algorithm neonx8 gen() 15741 MB/s May 9 23:57:39.284857 kernel: raid6: .... 
xor() 11923 MB/s, rmw enabled May 9 23:57:39.284904 kernel: raid6: using neon recovery algorithm May 9 23:57:39.294499 kernel: xor: measuring software checksum speed May 9 23:57:39.302388 kernel: 8regs : 18114 MB/sec May 9 23:57:39.302439 kernel: 32regs : 19655 MB/sec May 9 23:57:39.306145 kernel: arm64_neon : 26927 MB/sec May 9 23:57:39.310713 kernel: xor: using function: arm64_neon (26927 MB/sec) May 9 23:57:39.363518 kernel: Btrfs loaded, zoned=no, fsverity=no May 9 23:57:39.374955 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 9 23:57:39.394663 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:57:39.419933 systemd-udevd[437]: Using default interface naming scheme 'v255'. May 9 23:57:39.426276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:57:39.445820 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 9 23:57:39.466036 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation May 9 23:57:39.497884 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 9 23:57:39.520651 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 23:57:39.560747 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:57:39.581759 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 9 23:57:39.612554 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 9 23:57:39.625265 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 9 23:57:39.642816 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:57:39.651545 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 23:57:39.682712 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 9 23:57:39.705477 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 23:57:39.712165 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:57:39.729962 kernel: hv_vmbus: Vmbus version:5.3 May 9 23:57:39.732877 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:57:39.746371 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:57:39.746622 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:57:39.788390 kernel: hv_vmbus: registering driver hid_hyperv May 9 23:57:39.788420 kernel: pps_core: LinuxPPS API ver. 1 registered May 9 23:57:39.788430 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 May 9 23:57:39.760027 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:57:39.818136 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 9 23:57:39.818310 kernel: hv_vmbus: registering driver hyperv_keyboard May 9 23:57:39.818322 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 9 23:57:39.818332 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 May 9 23:57:39.818826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 9 23:57:39.930922 kernel: hv_vmbus: registering driver hv_netvsc May 9 23:57:39.930951 kernel: PTP clock support registered May 9 23:57:39.930961 kernel: hv_utils: Registering HyperV Utility Driver May 9 23:57:39.930970 kernel: hv_vmbus: registering driver hv_utils May 9 23:57:39.931004 kernel: hv_vmbus: registering driver hv_storvsc May 9 23:57:39.931014 kernel: hv_utils: Heartbeat IC version 3.0 May 9 23:57:39.941043 kernel: hv_utils: Shutdown IC version 3.2 May 9 23:57:39.941073 kernel: hv_utils: TimeSync IC version 4.0 May 9 23:57:39.941091 kernel: scsi host1: storvsc_host_t May 9 23:57:39.925540 systemd-resolved[254]: Clock change detected. Flushing caches. May 9 23:57:39.972744 kernel: scsi host0: storvsc_host_t May 9 23:57:39.972941 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 9 23:57:39.973039 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 9 23:57:39.963770 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 9 23:57:39.984105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:57:40.013303 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 9 23:57:40.013546 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 9 23:57:40.011585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:57:40.034677 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 9 23:57:40.011821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:57:40.054812 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 9 23:57:40.055012 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 9 23:57:40.055112 kernel: sd 0:0:0:0: [sda] Write Protect is off May 9 23:57:40.055195 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 9 23:57:40.055276 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 9 23:57:40.020816 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:57:40.081623 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 9 23:57:40.088089 kernel: hv_netvsc 002248b9-88cb-0022-48b9-88cb002248b9 eth0: VF slot 1 added May 9 23:57:40.070158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:57:40.109733 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 9 23:57:40.110455 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:57:40.131810 kernel: hv_vmbus: registering driver hv_pci May 9 23:57:40.131871 kernel: hv_pci 0ff17a0a-f1f6-4a88-8492-e41bc4b09550: PCI VMBus probing: Using version 0x10004 May 9 23:57:40.132217 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 9 23:57:40.179291 kernel: hv_pci 0ff17a0a-f1f6-4a88-8492-e41bc4b09550: PCI host bridge to bus f1f6:00 May 9 23:57:40.179463 kernel: pci_bus f1f6:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] May 9 23:57:40.179565 kernel: pci_bus f1f6:00: No busn resource found for root bus, will use [bus 00-ff] May 9 23:57:40.179656 kernel: pci f1f6:00:02.0: [15b3:1018] type 00 class 0x020000 May 9 23:57:40.402860 kernel: pci f1f6:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] May 9 23:57:40.411781 kernel: pci f1f6:00:02.0: enabling Extended Tags May 9 23:57:40.437164 kernel: pci f1f6:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f1f6:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) May 9 23:57:40.437394 kernel: pci_bus f1f6:00: busn_res: [bus 00-ff] end is updated to 00 May 9 23:57:40.430135 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:57:40.458341 kernel: pci f1f6:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] May 9 23:57:40.497645 kernel: mlx5_core f1f6:00:02.0: enabling device (0000 -> 0002) May 9 23:57:40.504657 kernel: mlx5_core f1f6:00:02.0: firmware version: 16.31.2424 May 9 23:57:40.652665 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. May 9 23:57:40.680476 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (488) May 9 23:57:40.693846 kernel: BTRFS: device fsid 7278434d-1c51-4098-9ab9-92db46b8a354 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (487) May 9 23:57:40.709058 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 9 23:57:40.728790 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. May 9 23:57:40.736949 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. May 9 23:57:40.766003 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 9 23:57:40.808125 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. May 9 23:57:40.929437 kernel: hv_netvsc 002248b9-88cb-0022-48b9-88cb002248b9 eth0: VF registering: eth1 May 9 23:57:40.929629 kernel: mlx5_core f1f6:00:02.0 eth1: joined to eth0 May 9 23:57:40.939715 kernel: mlx5_core f1f6:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) May 9 23:57:40.956672 kernel: mlx5_core f1f6:00:02.0 enP61942s1: renamed from eth1 May 9 23:57:41.800668 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 9 23:57:41.800903 disk-uuid[605]: The operation has completed successfully. May 9 23:57:41.860790 systemd[1]: disk-uuid.service: Deactivated successfully. May 9 23:57:41.860901 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 9 23:57:41.898801 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 9 23:57:41.913095 sh[723]: Success May 9 23:57:41.938925 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 9 23:57:42.114186 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 9 23:57:42.121089 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 9 23:57:42.138817 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 9 23:57:42.180868 kernel: BTRFS info (device dm-0): first mount of filesystem 7278434d-1c51-4098-9ab9-92db46b8a354 May 9 23:57:42.180939 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 9 23:57:42.189409 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 9 23:57:42.195375 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 9 23:57:42.200432 kernel: BTRFS info (device dm-0): using free space tree May 9 23:57:42.572223 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 9 23:57:42.578506 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 9 23:57:42.602924 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 9 23:57:42.615826 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 9 23:57:42.650127 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 9 23:57:42.650190 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 9 23:57:42.655218 kernel: BTRFS info (device sda6): using free space tree May 9 23:57:42.676160 kernel: BTRFS info (device sda6): auto enabling async discard May 9 23:57:42.684478 systemd[1]: mnt-oem.mount: Deactivated successfully. May 9 23:57:42.699663 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 9 23:57:42.707315 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 9 23:57:42.724194 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 9 23:57:42.763342 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 23:57:42.785814 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 23:57:42.815622 systemd-networkd[907]: lo: Link UP May 9 23:57:42.815665 systemd-networkd[907]: lo: Gained carrier May 9 23:57:42.817249 systemd-networkd[907]: Enumeration completed May 9 23:57:42.817909 systemd-networkd[907]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:57:42.817912 systemd-networkd[907]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 23:57:42.820717 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 23:57:42.828525 systemd[1]: Reached target network.target - Network. May 9 23:57:42.895650 kernel: mlx5_core f1f6:00:02.0 enP61942s1: Link up May 9 23:57:43.045671 kernel: hv_netvsc 002248b9-88cb-0022-48b9-88cb002248b9 eth0: Data path switched to VF: enP61942s1 May 9 23:57:43.045737 systemd-networkd[907]: enP61942s1: Link UP May 9 23:57:43.045817 systemd-networkd[907]: eth0: Link UP May 9 23:57:43.045914 systemd-networkd[907]: eth0: Gained carrier May 9 23:57:43.045923 systemd-networkd[907]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 9 23:57:43.058883 systemd-networkd[907]: enP61942s1: Gained carrier May 9 23:57:43.079685 systemd-networkd[907]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 9 23:57:43.417414 ignition[868]: Ignition 2.19.0 May 9 23:57:43.417433 ignition[868]: Stage: fetch-offline May 9 23:57:43.422746 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 9 23:57:43.417474 ignition[868]: no configs at "/usr/lib/ignition/base.d" May 9 23:57:43.417482 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 9 23:57:43.417585 ignition[868]: parsed url from cmdline: "" May 9 23:57:43.417588 ignition[868]: no config URL provided May 9 23:57:43.417593 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" May 9 23:57:43.455924 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 9 23:57:43.417599 ignition[868]: no config at "/usr/lib/ignition/user.ign" May 9 23:57:43.417605 ignition[868]: failed to fetch config: resource requires networking May 9 23:57:43.418058 ignition[868]: Ignition finished successfully May 9 23:57:43.484105 ignition[915]: Ignition 2.19.0 May 9 23:57:43.484113 ignition[915]: Stage: fetch May 9 23:57:43.484328 ignition[915]: no configs at "/usr/lib/ignition/base.d" May 9 23:57:43.484342 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 9 23:57:43.484471 ignition[915]: parsed url from cmdline: "" May 9 23:57:43.484475 ignition[915]: no config URL provided May 9 23:57:43.484480 ignition[915]: reading system config file "/usr/lib/ignition/user.ign" May 9 23:57:43.484487 ignition[915]: no config at "/usr/lib/ignition/user.ign" May 9 23:57:43.484514 ignition[915]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 9 23:57:43.589272 ignition[915]: GET result: OK May 9 23:57:43.589366 ignition[915]: config has been read from IMDS userdata May 9 23:57:43.589405 ignition[915]: parsing config with SHA512: ceee5fcc60505e39fe846a72f1e16969cf1d17feb7c844b9993cbc6f3b8682135fd2c3085a84daf2d441464213baf34a9d534cccea9a249c2a9f47e223026028 May 9 23:57:43.593560 unknown[915]: fetched base config from "system" May 9 23:57:43.594030 ignition[915]: fetch: fetch complete May 9 23:57:43.593568 unknown[915]: fetched base config from "system" May 9 23:57:43.594037 ignition[915]: fetch: fetch passed May 9 23:57:43.593581 unknown[915]: fetched user config from "azure" May 9 23:57:43.594088 ignition[915]: Ignition finished successfully May 9 23:57:43.596131 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 9 23:57:43.622828 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 9 23:57:43.647079 ignition[922]: Ignition 2.19.0 May 9 23:57:43.651054 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 9 23:57:43.647086 ignition[922]: Stage: kargs May 9 23:57:43.670956 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 9 23:57:43.647304 ignition[922]: no configs at "/usr/lib/ignition/base.d" May 9 23:57:43.647319 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 9 23:57:43.648456 ignition[922]: kargs: kargs passed May 9 23:57:43.713615 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 9 23:57:43.648509 ignition[922]: Ignition finished successfully May 9 23:57:43.726904 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
May 9 23:57:43.695840 ignition[928]: Ignition 2.19.0
May 9 23:57:43.738195 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 23:57:43.695851 ignition[928]: Stage: disks
May 9 23:57:43.752670 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:57:43.696064 ignition[928]: no configs at "/usr/lib/ignition/base.d"
May 9 23:57:43.764690 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:57:43.696073 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:57:43.778814 systemd[1]: Reached target basic.target - Basic System.
May 9 23:57:43.712117 ignition[928]: disks: disks passed
May 9 23:57:43.801916 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 23:57:43.712201 ignition[928]: Ignition finished successfully
May 9 23:57:43.906706 systemd-fsck[937]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
May 9 23:57:43.919785 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 23:57:43.941923 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 23:57:44.005658 kernel: EXT4-fs (sda9): mounted filesystem ffdb9517-5190-4050-8f70-de9d48dc1858 r/w with ordered data mode. Quota mode: none.
May 9 23:57:44.006094 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 23:57:44.016574 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 23:57:44.058728 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:57:44.079580 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 23:57:44.101647 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (948)
May 9 23:57:44.091613 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 9 23:57:44.110586 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 23:57:44.162037 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:57:44.162072 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:57:44.162082 kernel: BTRFS info (device sda6): using free space tree
May 9 23:57:44.110626 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:57:44.157241 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 23:57:44.189656 kernel: BTRFS info (device sda6): auto enabling async discard
May 9 23:57:44.189954 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 23:57:44.200016 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
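systemd-fsck-root above addresses the root filesystem by label rather than by device name. A sketch of the two steps involved, assuming udev has created the by-label symlink; e2fsck's -p (preen) flag is the unattended-repair mode comparable to what a boot-time check requests:

```python
#!/usr/bin/env python3
"""Sketch: resolve /dev/disk/by-label/ROOT to its block device (sda9 in
the log above) and run an unattended ext4 check on it. Requires root and
an unmounted filesystem; illustrative only."""
import os
import subprocess

label_path = "/dev/disk/by-label/ROOT"
device = os.path.realpath(label_path)  # udev symlink -> e.g. /dev/sda9
print(f"{label_path} -> {device}")

# -p: "preen" mode, automatically fixing harmless inconsistencies
subprocess.run(["e2fsck", "-p", device], check=False)
```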
May 9 23:57:44.223728 systemd-networkd[907]: eth0: Gained IPv6LL
May 9 23:57:44.287758 systemd-networkd[907]: enP61942s1: Gained IPv6LL
May 9 23:57:44.714626 coreos-metadata[950]: May 09 23:57:44.714 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 9 23:57:44.723421 coreos-metadata[950]: May 09 23:57:44.723 INFO Fetch successful
May 9 23:57:44.723421 coreos-metadata[950]: May 09 23:57:44.723 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 9 23:57:44.741372 coreos-metadata[950]: May 09 23:57:44.740 INFO Fetch successful
May 9 23:57:44.757806 coreos-metadata[950]: May 09 23:57:44.756 INFO wrote hostname ci-4081.3.3-n-84ab9604c4 to /sysroot/etc/hostname
May 9 23:57:44.765203 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 9 23:57:44.887140 initrd-setup-root[977]: cut: /sysroot/etc/passwd: No such file or directory
May 9 23:57:44.911948 initrd-setup-root[984]: cut: /sysroot/etc/group: No such file or directory
May 9 23:57:44.921513 initrd-setup-root[991]: cut: /sysroot/etc/shadow: No such file or directory
May 9 23:57:44.929863 initrd-setup-root[998]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 23:57:45.792865 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 23:57:45.807883 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 23:57:45.821960 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 23:57:45.837537 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:57:45.844300 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 23:57:45.865674 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 23:57:45.881662 ignition[1067]: INFO : Ignition 2.19.0
May 9 23:57:45.881662 ignition[1067]: INFO : Stage: mount
May 9 23:57:45.881662 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:57:45.881662 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:57:45.912895 ignition[1067]: INFO : mount: mount passed
May 9 23:57:45.912895 ignition[1067]: INFO : Ignition finished successfully
May 9 23:57:45.886838 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 23:57:45.917896 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 23:57:45.937007 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:57:45.976341 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1077)
May 9 23:57:45.976396 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:57:45.983734 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:57:45.989058 kernel: BTRFS info (device sda6): using free space tree
May 9 23:57:45.996665 kernel: BTRFS info (device sda6): auto enabling async discard
May 9 23:57:45.998934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
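The flatcar-metadata-hostname step above boils down to one IMDS request and one file write. A minimal sketch, assuming the same endpoint seen in the log and the initrd's /sysroot prefix (after switch-root the target would simply be /etc/hostname):

```python
#!/usr/bin/env python3
"""Sketch: fetch the instance name from IMDS and install it as the
hostname, mirroring the 'wrote hostname ... to /sysroot/etc/hostname'
line above. The /sysroot prefix applies only inside the initrd."""
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/name"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
name = urllib.request.urlopen(req, timeout=10).read().decode().strip()

with open("/sysroot/etc/hostname", "w") as f:
    f.write(name + "\n")
```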
May 9 23:57:46.026904 ignition[1095]: INFO : Ignition 2.19.0
May 9 23:57:46.026904 ignition[1095]: INFO : Stage: files
May 9 23:57:46.037105 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:57:46.037105 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:57:46.037105 ignition[1095]: DEBUG : files: compiled without relabeling support, skipping
May 9 23:57:46.057115 ignition[1095]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 23:57:46.057115 ignition[1095]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 23:57:46.090575 ignition[1095]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 23:57:46.100070 ignition[1095]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 23:57:46.100070 ignition[1095]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 23:57:46.091042 unknown[1095]: wrote ssh authorized keys file for user: core
May 9 23:57:46.157575 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 9 23:57:46.169272 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 9 23:57:46.272332 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 23:57:46.746859 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:57:46.758295 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 9 23:57:47.198615 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 9 23:57:47.425293 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:57:47.425293 ignition[1095]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 9 23:57:47.453057 ignition[1095]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:57:47.467059 ignition[1095]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:57:47.467059 ignition[1095]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 9 23:57:47.467059 ignition[1095]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 9 23:57:47.467059 ignition[1095]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 9 23:57:47.467059 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:57:47.467059 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:57:47.467059 ignition[1095]: INFO : files: files passed
May 9 23:57:47.467059 ignition[1095]: INFO : Ignition finished successfully
May 9 23:57:47.467927 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 23:57:47.526953 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 23:57:47.547893 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 23:57:47.564205 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 23:57:47.564316 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 23:57:47.608406 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:57:47.608406 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:57:47.628770 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:57:47.629247 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:57:47.645138 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 23:57:47.671940 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 23:57:47.714690 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 23:57:47.714822 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 23:57:47.729243 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 23:57:47.744194 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 23:57:47.757393 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 23:57:47.776207 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 23:57:47.800533 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:57:47.820999 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 23:57:47.843394 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 23:57:47.843535 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 23:57:47.857738 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 23:57:47.873430 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:57:47.888676 systemd[1]: Stopped target timers.target - Timer Units.
May 9 23:57:47.902260 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 23:57:47.902339 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:57:47.921317 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 23:57:47.935600 systemd[1]: Stopped target basic.target - Basic System.
May 9 23:57:47.947940 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 23:57:47.960717 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:57:47.975266 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 23:57:47.990041 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 23:57:48.003760 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:57:48.018377 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 23:57:48.033331 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 23:57:48.046830 systemd[1]: Stopped target swap.target - Swaps.
May 9 23:57:48.058752 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 23:57:48.058840 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:57:48.077974 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 23:57:48.092081 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:57:48.107166 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 23:57:48.114896 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:57:48.123106 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 23:57:48.123187 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 23:57:48.145379 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 23:57:48.145445 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:57:48.153957 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 23:57:48.154016 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 23:57:48.167139 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 9 23:57:48.167192 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 9 23:57:48.244509 ignition[1148]: INFO : Ignition 2.19.0
May 9 23:57:48.244509 ignition[1148]: INFO : Stage: umount
May 9 23:57:48.244509 ignition[1148]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:57:48.244509 ignition[1148]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:57:48.244509 ignition[1148]: INFO : umount: umount passed
May 9 23:57:48.244509 ignition[1148]: INFO : Ignition finished successfully
May 9 23:57:48.203863 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 23:57:48.224574 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 23:57:48.224684 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:57:48.236841 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 23:57:48.255163 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 23:57:48.255243 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:57:48.267731 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 23:57:48.267790 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:57:48.276229 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 23:57:48.276343 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 23:57:48.298939 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 23:57:48.299030 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 23:57:48.318291 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 23:57:48.318372 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 23:57:48.331696 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 9 23:57:48.331757 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 9 23:57:48.341814 systemd[1]: Stopped target network.target - Network.
May 9 23:57:48.351382 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 23:57:48.351470 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:57:48.367703 systemd[1]: Stopped target paths.target - Path Units.
May 9 23:57:48.379414 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 23:57:48.391702 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:57:48.401486 systemd[1]: Stopped target slices.target - Slice Units.
May 9 23:57:48.413717 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 23:57:48.428324 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 23:57:48.428391 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:57:48.443118 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 23:57:48.443186 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:57:48.456863 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 23:57:48.456922 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 23:57:48.470251 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 23:57:48.470304 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 23:57:48.484334 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 23:57:48.496799 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 23:57:48.509160 systemd-networkd[907]: eth0: DHCPv6 lease lost
May 9 23:57:48.511854 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 23:57:48.512482 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 23:57:48.512592 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 23:57:48.802756 kernel: hv_netvsc 002248b9-88cb-0022-48b9-88cb002248b9 eth0: Data path switched from VF: enP61942s1
May 9 23:57:48.526618 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 23:57:48.526819 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 23:57:48.543039 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 23:57:48.543111 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:57:48.578866 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 23:57:48.585106 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 23:57:48.585180 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:57:48.594136 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 23:57:48.594196 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 23:57:48.608423 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 23:57:48.608483 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 23:57:48.625732 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 23:57:48.625802 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:57:48.640570 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:57:48.688182 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 23:57:48.688457 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:57:48.703525 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 23:57:48.703584 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 23:57:48.717597 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 23:57:48.717645 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:57:48.731090 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 23:57:48.731150 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:57:48.752756 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 23:57:48.752818 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 23:57:48.765167 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:57:48.765226 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:57:48.817864 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 23:57:48.833415 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 23:57:48.833515 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:57:48.856301 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:57:48.856372 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:48.871600 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 23:57:48.871731 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 23:57:48.955861 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 23:57:48.956032 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 23:57:48.997310 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 23:57:48.997780 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 23:57:49.009717 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 23:57:49.022421 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 23:57:49.022508 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 23:57:49.051953 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 23:57:49.175672 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
May 9 23:57:49.078762 systemd[1]: Switching root.
May 9 23:57:49.179947 systemd-journald[217]: Journal stopped
May 9 23:57:54.609902 kernel: SELinux: policy capability network_peer_controls=1
May 9 23:57:54.609928 kernel: SELinux: policy capability open_perms=1
May 9 23:57:54.609939 kernel: SELinux: policy capability extended_socket_class=1
May 9 23:57:54.609946 kernel: SELinux: policy capability always_check_network=0
May 9 23:57:54.609956 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 23:57:54.609964 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 23:57:54.609973 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 23:57:54.609981 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 23:57:54.609989 kernel: audit: type=1403 audit(1746835070.239:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 23:57:54.609998 systemd[1]: Successfully loaded SELinux policy in 116.750ms.
May 9 23:57:54.610010 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.941ms.
May 9 23:57:54.610020 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:57:54.610029 systemd[1]: Detected virtualization microsoft.
May 9 23:57:54.610037 systemd[1]: Detected architecture arm64.
May 9 23:57:54.610046 systemd[1]: Detected first boot.
May 9 23:57:54.610057 systemd[1]: Hostname set to <ci-4081.3.3-n-84ab9604c4>.
May 9 23:57:54.610066 systemd[1]: Initializing machine ID from random generator.
May 9 23:57:54.610074 zram_generator::config[1189]: No configuration found.
May 9 23:57:54.610085 systemd[1]: Populated /etc with preset unit settings.
May 9 23:57:54.610096 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 9 23:57:54.610105 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 9 23:57:54.610114 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 9 23:57:54.610125 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 23:57:54.610134 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 23:57:54.610144 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 23:57:54.610153 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 23:57:54.610162 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 23:57:54.610171 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 23:57:54.610181 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 23:57:54.610191 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 23:57:54.610201 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:57:54.610210 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:57:54.610219 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 23:57:54.610228 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 23:57:54.610238 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 23:57:54.610247 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:57:54.610256 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 9 23:57:54.610266 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:57:54.610275 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 9 23:57:54.610285 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 9 23:57:54.610297 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 9 23:57:54.610306 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 23:57:54.610316 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:57:54.610325 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:57:54.610334 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:57:54.610345 systemd[1]: Reached target swap.target - Swaps.
May 9 23:57:54.610354 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 23:57:54.610364 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 23:57:54.610373 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:57:54.610382 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:57:54.610392 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:57:54.610403 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 23:57:54.610412 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 23:57:54.610422 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 23:57:54.610431 systemd[1]: Mounting media.mount - External Media Directory...
May 9 23:57:54.610440 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 23:57:54.610450 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 23:57:54.610459 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 23:57:54.610471 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 23:57:54.610480 systemd[1]: Reached target machines.target - Containers.
May 9 23:57:54.610490 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 23:57:54.610500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:57:54.610510 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:57:54.610519 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 23:57:54.610529 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:57:54.610538 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 23:57:54.610549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:57:54.610559 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 23:57:54.610568 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:57:54.610578 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 23:57:54.610588 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 9 23:57:54.610598 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 9 23:57:54.610607 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 9 23:57:54.610616 systemd[1]: Stopped systemd-fsck-usr.service.
May 9 23:57:54.610627 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:57:54.610647 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:57:54.610658 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 23:57:54.610667 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 23:57:54.610694 systemd-journald[1268]: Collecting audit messages is disabled.
May 9 23:57:54.610716 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:57:54.610727 systemd-journald[1268]: Journal started
May 9 23:57:54.610748 systemd-journald[1268]: Runtime Journal (/run/log/journal/ff4ef3678aef4a1dae933e31e6e08016) is 8.0M, max 78.5M, 70.5M free.
May 9 23:57:53.424706 systemd[1]: Queued start job for default target multi-user.target.
May 9 23:57:53.619023 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 9 23:57:53.619414 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 23:57:53.619790 systemd[1]: systemd-journald.service: Consumed 3.675s CPU time.
May 9 23:57:54.614839 kernel: fuse: init (API version 7.39)
May 9 23:57:54.620774 kernel: loop: module loaded
May 9 23:57:54.629201 systemd[1]: verity-setup.service: Deactivated successfully.
May 9 23:57:54.640077 systemd[1]: Stopped verity-setup.service.
May 9 23:57:54.660857 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:57:54.661768 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 23:57:54.668182 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 23:57:54.674890 systemd[1]: Mounted media.mount - External Media Directory.
May 9 23:57:54.683232 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 23:57:54.695866 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 23:57:54.696662 kernel: ACPI: bus type drm_connector registered
May 9 23:57:54.705880 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 23:57:54.712324 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:57:54.721244 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 23:57:54.721406 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 23:57:54.728877 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:57:54.729018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:57:54.736336 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 23:57:54.736480 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 23:57:54.743298 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:57:54.743448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:57:54.751461 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 23:57:54.751595 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 23:57:54.758393 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:57:54.758541 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:57:54.765254 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:57:54.772219 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 23:57:54.781227 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 23:57:54.788853 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:57:54.807856 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 23:57:54.825739 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 23:57:54.833276 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 23:57:54.839924 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 23:57:54.839966 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:57:54.846933 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 9 23:57:54.864847 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 23:57:54.873197 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 23:57:54.879148 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:57:54.935872 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 23:57:54.943367 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 23:57:54.951219 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 23:57:54.954793 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 23:57:54.967063 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 23:57:54.968534 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:57:54.979111 systemd-journald[1268]: Time spent on flushing to /var/log/journal/ff4ef3678aef4a1dae933e31e6e08016 is 25.759ms for 892 entries.
May 9 23:57:54.979111 systemd-journald[1268]: System Journal (/var/log/journal/ff4ef3678aef4a1dae933e31e6e08016) is 8.0M, max 2.6G, 2.6G free.
May 9 23:57:55.043776 systemd-journald[1268]: Received client request to flush runtime journal.
May 9 23:57:54.987045 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 23:57:54.996812 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 23:57:55.009491 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 23:57:55.017421 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 23:57:55.024625 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 23:57:55.036070 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 23:57:55.044818 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 23:57:55.052659 kernel: loop0: detected capacity change from 0 to 114432
May 9 23:57:55.056624 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 23:57:55.070342 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 23:57:55.081950 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 9 23:57:55.090021 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 23:57:55.096368 udevadm[1324]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 9 23:57:55.344181 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:57:55.482580 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 23:57:55.494931 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:57:55.983946 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
May 9 23:57:55.983963 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
May 9 23:57:55.989197 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:57:56.476199 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 23:57:56.476964 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 9 23:57:56.594810 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 23:57:56.745684 kernel: loop1: detected capacity change from 0 to 114328
May 9 23:57:57.693830 kernel: loop2: detected capacity change from 0 to 31320
May 9 23:57:58.967785 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 23:57:58.981868 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:57:59.000675 kernel: loop3: detected capacity change from 0 to 201592
May 9 23:57:59.025784 systemd-udevd[1348]: Using default interface naming scheme 'v255'.
May 9 23:57:59.028756 kernel: loop4: detected capacity change from 0 to 114432
May 9 23:57:59.041741 kernel: loop5: detected capacity change from 0 to 114328
May 9 23:57:59.053663 kernel: loop6: detected capacity change from 0 to 31320
May 9 23:57:59.063688 kernel: loop7: detected capacity change from 0 to 201592
May 9 23:57:59.074350 (sd-merge)[1350]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
May 9 23:57:59.074870 (sd-merge)[1350]: Merged extensions into '/usr'.
May 9 23:57:59.078234 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:57:59.109167 systemd[1]: Reloading requested from client PID 1322 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 23:57:59.109384 systemd[1]: Reloading...
May 9 23:57:59.188563 zram_generator::config[1396]: No configuration found.
May 9 23:57:59.363189 kernel: hv_vmbus: registering driver hyperv_fb
May 9 23:57:59.363321 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 9 23:57:59.371060 kernel: hv_vmbus: registering driver hv_balloon
May 9 23:57:59.371169 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 9 23:57:59.383609 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 9 23:57:59.383747 kernel: hv_balloon: Memory hot add disabled on ARM64
May 9 23:57:59.383810 kernel: mousedev: PS/2 mouse device common for all mice
May 9 23:57:59.403882 kernel: Console: switching to colour dummy device 80x25
May 9 23:57:59.412913 kernel: Console: switching to colour frame buffer device 128x48
May 9 23:57:59.725141 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:57:59.800796 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 9 23:57:59.803781 systemd[1]: Reloading finished in 693 ms.
May 9 23:57:59.828679 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1362)
May 9 23:57:59.853357 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 23:57:59.901322 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 9 23:57:59.923969 systemd[1]: Starting ensure-sysext.service...
May 9 23:57:59.929685 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 23:57:59.938317 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:57:59.946604 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:57:59.956034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:57:59.972824 systemd[1]: Reloading requested from client PID 1505 ('systemctl') (unit ensure-sysext.service)...
May 9 23:57:59.972843 systemd[1]: Reloading...
May 9 23:57:59.982753 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 23:57:59.983668 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 23:57:59.984850 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 23:57:59.985313 systemd-tmpfiles[1508]: ACLs are not supported, ignoring.
May 9 23:57:59.985442 systemd-tmpfiles[1508]: ACLs are not supported, ignoring.
May 9 23:58:00.059733 zram_generator::config[1543]: No configuration found.
May 9 23:58:00.083471 systemd-tmpfiles[1508]: Detected autofs mount point /boot during canonicalization of boot.
May 9 23:58:00.083482 systemd-tmpfiles[1508]: Skipping /boot
May 9 23:58:00.092104 systemd-tmpfiles[1508]: Detected autofs mount point /boot during canonicalization of boot.
May 9 23:58:00.092773 systemd-tmpfiles[1508]: Skipping /boot
May 9 23:58:00.190987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:58:00.267327 systemd[1]: Reloading finished in 294 ms.
May 9 23:58:00.294271 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 23:58:00.303716 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:58:00.317152 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 23:58:00.337994 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 9 23:58:00.345512 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 23:58:00.355011 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 23:58:00.363970 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 23:58:00.379544 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:58:00.388391 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 23:58:00.396785 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 23:58:00.411454 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:58:00.419991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:58:00.433298 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 23:58:00.442086 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:58:00.454326 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:58:00.460203 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:58:00.460404 systemd[1]: Reached target time-set.target - System Time Set.
May 9 23:58:00.467103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:58:00.468684 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:58:00.475611 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 23:58:00.475772 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 23:58:00.485501 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:58:00.485665 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:58:00.493883 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:58:00.495170 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:58:00.503718 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:58:00.503925 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:58:00.514687 systemd[1]: Finished ensure-sysext.service.
May 9 23:58:00.523487 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 23:58:00.523571 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
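The loop devices and the sd-merge lines above ("Using extensions ...; Merged extensions into '/usr'") are systemd-sysext overlaying extension images onto /usr, and ensure-sysext re-checks that state. A sketch of inspecting and re-running the merge with the standard systemd-sysext verbs (refresh requires root):

```python
#!/usr/bin/env python3
"""Sketch: inspect and refresh merged system extensions, matching the
sd-merge activity in the log. 'status' lists hierarchies and merged
images; 'refresh' re-merges after images under /etc/extensions or
/var/lib/extensions change (e.g. the kubernetes.raw link from Ignition)."""
import subprocess

subprocess.run(["systemd-sysext", "status"], check=True)
subprocess.run(["systemd-sysext", "refresh"], check=True)
```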
May 9 23:58:00.529818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:58:00.559696 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 23:58:00.571163 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 23:58:00.605188 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 23:58:00.612169 lvm[1605]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 23:58:00.646510 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 23:58:00.655174 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:58:00.666827 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 23:58:00.681316 lvm[1638]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 23:58:00.708429 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 9 23:58:00.994219 systemd-resolved[1607]: Positive Trust Anchors:
May 9 23:58:00.994240 systemd-resolved[1607]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:58:00.994273 systemd-resolved[1607]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:58:01.095460 systemd-resolved[1607]: Using system hostname 'ci-4081.3.3-n-84ab9604c4'.
May 9 23:58:01.097304 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:58:01.103069 systemd-networkd[1507]: lo: Link UP
May 9 23:58:01.103084 systemd-networkd[1507]: lo: Gained carrier
May 9 23:58:01.103942 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:58:01.105558 systemd-networkd[1507]: Enumeration completed
May 9 23:58:01.106061 systemd-networkd[1507]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:58:01.106132 systemd-networkd[1507]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 23:58:01.111472 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 23:58:01.120459 systemd[1]: Reached target network.target - Network.
May 9 23:58:01.133881 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 23:58:01.189656 kernel: mlx5_core f1f6:00:02.0 enP61942s1: Link up
May 9 23:58:01.236660 kernel: hv_netvsc 002248b9-88cb-0022-48b9-88cb002248b9 eth0: Data path switched to VF: enP61942s1
May 9 23:58:01.237923 systemd-networkd[1507]: enP61942s1: Link UP
May 9 23:58:01.238018 systemd-networkd[1507]: eth0: Link UP
May 9 23:58:01.238022 systemd-networkd[1507]: eth0: Gained carrier
May 9 23:58:01.238037 systemd-networkd[1507]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:58:01.242040 systemd-networkd[1507]: enP61942s1: Gained carrier
May 9 23:58:01.251691 systemd-networkd[1507]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 9 23:58:01.291596 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:58:01.489376 augenrules[1651]: No rules
May 9 23:58:01.490972 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 9 23:58:02.123841 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 23:58:02.131715 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 23:58:02.655826 systemd-networkd[1507]: enP61942s1: Gained IPv6LL
May 9 23:58:03.103795 systemd-networkd[1507]: eth0: Gained IPv6LL
May 9 23:58:03.106474 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 9 23:58:03.116388 systemd[1]: Reached target network-online.target - Network is Online.
May 9 23:58:08.104600 ldconfig[1310]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 23:58:08.121556 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 23:58:08.133829 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 23:58:08.149593 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 23:58:08.156326 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:58:08.162501 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 23:58:08.169444 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 23:58:08.177347 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 23:58:08.183773 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 23:58:08.190936 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 23:58:08.197973 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 23:58:08.198013 systemd[1]: Reached target paths.target - Path Units.
May 9 23:58:08.203076 systemd[1]: Reached target timers.target - Timer Units.
May 9 23:58:08.210714 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 23:58:08.218478 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 23:58:08.227801 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 23:58:08.234310 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 23:58:08.241867 systemd[1]: Reached target sockets.target - Socket Units.
May 9 23:58:08.247394 systemd[1]: Reached target basic.target - Basic System.
May 9 23:58:08.252663 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 23:58:08.252697 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 23:58:08.258752 systemd[1]: Starting chronyd.service - NTP client/server...
May 9 23:58:08.265817 systemd[1]: Starting containerd.service - containerd container runtime...
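The positive trust anchor printed by systemd-resolved a few lines back is the root zone's published DS record for the KSK-2017 key. A worked split of its fields: key tag 20326, algorithm 8 is RSA/SHA-256, digest type 2 is SHA-256, so the digest must decode to 32 bytes:

```python
#!/usr/bin/env python3
"""Worked reading of the DNSSEC trust anchor logged by systemd-resolved:
'. IN DS <key-tag> <algorithm> <digest-type> <digest>'. Values below are
copied from the log; 20326 is the well-known root KSK-2017 key tag."""

ds = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, _cls, _rrtype, key_tag, algorithm, digest_type, digest = ds.split()
assert len(bytes.fromhex(digest)) == 32  # digest type 2 = SHA-256
print(f"zone={owner!r} key_tag={key_tag} algorithm={algorithm} "
      f"digest_type={digest_type}")
```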
May 9 23:58:08.278892 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 9 23:58:08.287600 (chronyd)[1665]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 9 23:58:08.293840 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 23:58:08.300794 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 23:58:08.311528 jq[1669]: false May 9 23:58:08.312393 chronyd[1674]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 9 23:58:08.315991 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 23:58:08.321789 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 23:58:08.321835 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). May 9 23:58:08.323045 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. May 9 23:58:08.329172 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). May 9 23:58:08.329941 chronyd[1674]: Timezone right/UTC failed leap second check, ignoring May 9 23:58:08.330162 chronyd[1674]: Loaded seccomp filter (level 2) May 9 23:58:08.332630 KVP[1675]: KVP starting; pid is:1675 May 9 23:58:08.332817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:08.343937 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 23:58:08.354882 kernel: hv_utils: KVP IC version 4.0 May 9 23:58:08.354698 KVP[1675]: KVP LIC Version: 3.1 May 9 23:58:08.352863 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 23:58:08.366002 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 9 23:58:08.374868 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 23:58:08.387023 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 23:58:08.403000 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 9 23:58:08.405267 extend-filesystems[1672]: Found loop4 May 9 23:58:08.405267 extend-filesystems[1672]: Found loop5 May 9 23:58:08.405267 extend-filesystems[1672]: Found loop6 May 9 23:58:08.405267 extend-filesystems[1672]: Found loop7 May 9 23:58:08.405267 extend-filesystems[1672]: Found sda May 9 23:58:08.405267 extend-filesystems[1672]: Found sda1 May 9 23:58:08.405267 extend-filesystems[1672]: Found sda2 May 9 23:58:08.405267 extend-filesystems[1672]: Found sda3 May 9 23:58:08.405267 extend-filesystems[1672]: Found usr May 9 23:58:08.405267 extend-filesystems[1672]: Found sda4 May 9 23:58:08.405267 extend-filesystems[1672]: Found sda6 May 9 23:58:08.405267 extend-filesystems[1672]: Found sda7 May 9 23:58:08.405267 extend-filesystems[1672]: Found sda9 May 9 23:58:08.405267 extend-filesystems[1672]: Checking size of /dev/sda9 May 9 23:58:08.714327 coreos-metadata[1667]: May 09 23:58:08.681 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 9 23:58:08.714327 coreos-metadata[1667]: May 09 23:58:08.688 INFO Fetch successful May 9 23:58:08.714327 coreos-metadata[1667]: May 09 23:58:08.688 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 9 23:58:08.714327 coreos-metadata[1667]: May 09 23:58:08.696 INFO Fetch successful May 9 23:58:08.714327 coreos-metadata[1667]: May 09 23:58:08.696 INFO Fetching http://168.63.129.16/machine/ad08b319-af47-48d5-a6ad-762c086b72ab/fa4e4191%2D2e0b%2D4631%2D8a14%2Dbad5a10382ae.%5Fci%2D4081.3.3%2Dn%2D84ab9604c4?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 9 23:58:08.714327 coreos-metadata[1667]: May 09 23:58:08.705 INFO Fetch successful May 9 23:58:08.714327 coreos-metadata[1667]: May 09 23:58:08.705 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 9 23:58:08.717428 extend-filesystems[1672]: Old size kept for /dev/sda9 May 9 23:58:08.717428 extend-filesystems[1672]: Found sr0 May 9 23:58:08.751863 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1715) May 9 23:58:08.416851 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 23:58:08.535515 dbus-daemon[1668]: [system] SELinux support is enabled May 9 23:58:08.752378 coreos-metadata[1667]: May 09 23:58:08.727 INFO Fetch successful May 9 23:58:08.417365 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 23:58:08.752818 update_engine[1695]: I20250509 23:58:08.553079 1695 main.cc:92] Flatcar Update Engine starting May 9 23:58:08.752818 update_engine[1695]: I20250509 23:58:08.578820 1695 update_check_scheduler.cc:74] Next update check in 7m42s May 9 23:58:08.423845 systemd[1]: Starting update-engine.service - Update Engine... May 9 23:58:08.753166 jq[1700]: true May 9 23:58:08.449389 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 23:58:08.460855 systemd[1]: Started chronyd.service - NTP client/server. May 9 23:58:08.482169 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 23:58:08.753516 tar[1719]: linux-arm64/LICENSE May 9 23:58:08.753516 tar[1719]: linux-arm64/helm May 9 23:58:08.482336 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
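The coreos-metadata fetches above hit the two Azure control-plane endpoints: the WireServer at 168.63.129.16 (versions, goal state, shared config) and the Instance Metadata Service at 169.254.169.254 (the vmSize query). As a minimal sketch, the final IMDS request can be replayed using the documented IMDS contract; the endpoint, api-version and the mandatory "Metadata: true" header are as logged, while the timeout is an arbitrary choice here:

    import urllib.request

    # Same IMDS request that coreos-metadata logs above; IMDS only answers
    # link-local requests that carry the "Metadata: true" header.
    IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
                "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        # format=text returns the bare SKU name, e.g. "Standard_D2ps_v5"
        print(resp.read().decode())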
May 9 23:58:08.482586 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 23:58:08.754419 jq[1721]: true May 9 23:58:08.482747 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 23:58:08.501250 systemd[1]: motdgen.service: Deactivated successfully. May 9 23:58:08.501426 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 23:58:08.536120 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 23:58:08.558766 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 23:58:08.618071 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 23:58:08.618272 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 23:58:08.639483 systemd-logind[1686]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) May 9 23:58:08.644256 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 23:58:08.644282 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 23:58:08.644912 systemd-logind[1686]: New seat seat0. May 9 23:58:08.663838 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 23:58:08.663861 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 23:58:08.666196 (ntainerd)[1722]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 23:58:08.688797 systemd[1]: Started systemd-logind.service - User Login Management. May 9 23:58:08.699200 systemd[1]: Started update-engine.service - Update Engine. May 9 23:58:08.723924 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 23:58:08.913007 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 9 23:58:08.928846 bash[1776]: Updated "/home/core/.ssh/authorized_keys" May 9 23:58:08.928576 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 23:58:08.930000 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 23:58:08.943492 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 9 23:58:09.126283 locksmithd[1743]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 23:58:09.364513 containerd[1722]: time="2025-05-09T23:58:09.364393780Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 9 23:58:09.417490 sshd_keygen[1694]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 23:58:09.433366 containerd[1722]: time="2025-05-09T23:58:09.433300100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 23:58:09.435717 containerd[1722]: time="2025-05-09T23:58:09.434991300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:09.435717 containerd[1722]: time="2025-05-09T23:58:09.435035660Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 23:58:09.435717 containerd[1722]: time="2025-05-09T23:58:09.435053220Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 23:58:09.435717 containerd[1722]: time="2025-05-09T23:58:09.435234100Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 23:58:09.435717 containerd[1722]: time="2025-05-09T23:58:09.435250420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 23:58:09.435717 containerd[1722]: time="2025-05-09T23:58:09.435319940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:09.435717 containerd[1722]: time="2025-05-09T23:58:09.435332740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 23:58:09.435717 containerd[1722]: time="2025-05-09T23:58:09.435498020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:09.435717 containerd[1722]: time="2025-05-09T23:58:09.435513540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 23:58:09.435717 containerd[1722]: time="2025-05-09T23:58:09.435527780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:09.435717 containerd[1722]: time="2025-05-09T23:58:09.435540260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 23:58:09.436010 containerd[1722]: time="2025-05-09T23:58:09.435610980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 23:58:09.436258 containerd[1722]: time="2025-05-09T23:58:09.436233260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 9 23:58:09.436464 containerd[1722]: time="2025-05-09T23:58:09.436444340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:09.436523 containerd[1722]: time="2025-05-09T23:58:09.436510060Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 23:58:09.436736 containerd[1722]: time="2025-05-09T23:58:09.436716140Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 9 23:58:09.436865 containerd[1722]: time="2025-05-09T23:58:09.436848500Z" level=info msg="metadata content store policy set" policy=shared May 9 23:58:09.453855 containerd[1722]: time="2025-05-09T23:58:09.453772260Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 23:58:09.454758 containerd[1722]: time="2025-05-09T23:58:09.454024140Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 23:58:09.454758 containerd[1722]: time="2025-05-09T23:58:09.454057740Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 23:58:09.454758 containerd[1722]: time="2025-05-09T23:58:09.454258300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 23:58:09.454758 containerd[1722]: time="2025-05-09T23:58:09.454283540Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 23:58:09.454758 containerd[1722]: time="2025-05-09T23:58:09.454473540Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 23:58:09.456521 containerd[1722]: time="2025-05-09T23:58:09.455732940Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 23:58:09.456521 containerd[1722]: time="2025-05-09T23:58:09.455903380Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 23:58:09.456521 containerd[1722]: time="2025-05-09T23:58:09.455919860Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 23:58:09.456521 containerd[1722]: time="2025-05-09T23:58:09.455934260Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 23:58:09.456521 containerd[1722]: time="2025-05-09T23:58:09.455950940Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 23:58:09.456521 containerd[1722]: time="2025-05-09T23:58:09.455965180Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 23:58:09.456521 containerd[1722]: time="2025-05-09T23:58:09.455978460Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 23:58:09.456521 containerd[1722]: time="2025-05-09T23:58:09.455993540Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 23:58:09.456521 containerd[1722]: time="2025-05-09T23:58:09.456007780Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 23:58:09.456521 containerd[1722]: time="2025-05-09T23:58:09.456020580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 23:58:09.456521 containerd[1722]: time="2025-05-09T23:58:09.456033900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 23:58:09.456521 containerd[1722]: time="2025-05-09T23:58:09.456046340Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 9 23:58:09.456521 containerd[1722]: time="2025-05-09T23:58:09.456076100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456521 containerd[1722]: time="2025-05-09T23:58:09.456091140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456104060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456117820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456135780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456150780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456166580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456180380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456194060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456209780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456221460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456233260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456245660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456269900Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456291420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456303660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 23:58:09.456871 containerd[1722]: time="2025-05-09T23:58:09.456315420Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 23:58:09.457701 containerd[1722]: time="2025-05-09T23:58:09.457668380Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 23:58:09.457816 containerd[1722]: time="2025-05-09T23:58:09.457797220Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 23:58:09.457876 containerd[1722]: time="2025-05-09T23:58:09.457862540Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 23:58:09.458412 containerd[1722]: time="2025-05-09T23:58:09.457916260Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 23:58:09.458412 containerd[1722]: time="2025-05-09T23:58:09.457931220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 23:58:09.458412 containerd[1722]: time="2025-05-09T23:58:09.457947260Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 23:58:09.458412 containerd[1722]: time="2025-05-09T23:58:09.457958980Z" level=info msg="NRI interface is disabled by configuration." May 9 23:58:09.458412 containerd[1722]: time="2025-05-09T23:58:09.457969340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 9 23:58:09.458579 containerd[1722]: time="2025-05-09T23:58:09.458296100Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 23:58:09.458579 containerd[1722]: time="2025-05-09T23:58:09.458360460Z" level=info msg="Connect containerd service" May 9 23:58:09.459303 containerd[1722]: time="2025-05-09T23:58:09.458844900Z" level=info msg="using legacy CRI server" May 9 23:58:09.459303 containerd[1722]: time="2025-05-09T23:58:09.458866660Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 23:58:09.459303 containerd[1722]: time="2025-05-09T23:58:09.458988780Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 23:58:09.460420 containerd[1722]: time="2025-05-09T23:58:09.460387420Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 23:58:09.460699 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 23:58:09.461675 containerd[1722]: time="2025-05-09T23:58:09.461529260Z" level=info msg="Start subscribing containerd event" May 9 23:58:09.461675 containerd[1722]: time="2025-05-09T23:58:09.461590860Z" level=info msg="Start recovering state" May 9 23:58:09.461802 containerd[1722]: time="2025-05-09T23:58:09.461788460Z" level=info msg="Start event monitor" May 9 23:58:09.462115 containerd[1722]: time="2025-05-09T23:58:09.461949060Z" level=info msg="Start snapshots syncer" May 9 23:58:09.462115 containerd[1722]: time="2025-05-09T23:58:09.461969700Z" level=info msg="Start cni network conf syncer for default" May 9 23:58:09.462115 containerd[1722]: time="2025-05-09T23:58:09.461982700Z" level=info msg="Start streaming server" May 9 23:58:09.462673 containerd[1722]: time="2025-05-09T23:58:09.462601220Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 23:58:09.462847 containerd[1722]: time="2025-05-09T23:58:09.462761340Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 23:58:09.462847 containerd[1722]: time="2025-05-09T23:58:09.462824940Z" level=info msg="containerd successfully booted in 0.103569s" May 9 23:58:09.467323 systemd[1]: Started containerd.service - containerd container runtime. May 9 23:58:09.484034 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 23:58:09.488075 tar[1719]: linux-arm64/README.md May 9 23:58:09.492935 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 9 23:58:09.508133 systemd[1]: issuegen.service: Deactivated successfully. May 9 23:58:09.508299 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 23:58:09.519414 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 23:58:09.537576 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 9 23:58:09.552811 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 23:58:09.574892 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 23:58:09.588180 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 23:58:09.597394 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 9 23:58:09.604954 systemd[1]: Reached target getty.target - Login Prompts. 
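containerd comes up without pod networking at this point: the CRI plugin logs "no network config found in /etc/cni/net.d" because no CNI conflist has been installed yet; the cluster's network add-on normally drops one in later. Purely as an illustration of the file format (not the config this node eventually uses), a minimal bridge conflist could be generated like this; the network name, bridge name and 10.88.0.0/16 subnet are placeholder assumptions:

    import json
    import pathlib

    # Illustrative CNI network config; the real cluster installs its own.
    conflist = {
        "cniVersion": "0.4.0",
        "name": "demo-net",               # placeholder network name
        "plugins": [{
            "type": "bridge",             # needs the standard CNI bridge plugin binary
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16", # placeholder pod subnet
            },
        }],
    }

    path = pathlib.Path("/etc/cni/net.d/10-demo.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conflist, indent=2))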
May 9 23:58:09.715615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:09.723252 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 23:58:09.724245 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:58:09.730835 systemd[1]: Startup finished in 736ms (kernel) + 12.298s (initrd) + 19.606s (userspace) = 32.642s. May 9 23:58:10.044467 login[1824]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:10.046782 login[1825]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:10.055787 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 23:58:10.063327 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 23:58:10.067337 systemd-logind[1686]: New session 2 of user core. May 9 23:58:10.077961 systemd-logind[1686]: New session 1 of user core. May 9 23:58:10.085029 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 23:58:10.095004 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 23:58:10.112549 (systemd)[1843]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 23:58:10.183952 kubelet[1831]: E0509 23:58:10.183880 1831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:58:10.185680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:58:10.185819 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:58:10.265069 systemd[1843]: Queued start job for default target default.target. May 9 23:58:10.272685 systemd[1843]: Created slice app.slice - User Application Slice. May 9 23:58:10.272712 systemd[1843]: Reached target paths.target - Paths. May 9 23:58:10.272725 systemd[1843]: Reached target timers.target - Timers. May 9 23:58:10.274270 systemd[1843]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 23:58:10.286511 systemd[1843]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 23:58:10.286668 systemd[1843]: Reached target sockets.target - Sockets. May 9 23:58:10.286684 systemd[1843]: Reached target basic.target - Basic System. May 9 23:58:10.286733 systemd[1843]: Reached target default.target - Main User Target. May 9 23:58:10.286761 systemd[1843]: Startup finished in 166ms. May 9 23:58:10.286914 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 23:58:10.288353 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 23:58:10.290053 systemd[1]: Started session-2.scope - Session 2 of User core. 
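The kubelet exit above is expected at this stage of boot: the unit starts before anything has written /var/lib/kubelet/config.yaml, so it fails and systemd keeps scheduling restarts (visible further down the log). On a kubeadm-managed node that file is generated by `kubeadm init`/`kubeadm join`; purely as a sketch of its shape, a minimal KubeletConfiguration consistent with the SystemdCgroup:true runc option in the containerd config dump above would be:

    import pathlib

    # Sketch only: kubeadm normally generates this file. The single setting
    # here matches the systemd cgroup driver seen in the containerd config.
    KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    """

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(KUBELET_CONFIG)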
May 9 23:58:11.253675 waagent[1820]: 2025-05-09T23:58:11.251013Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 May 9 23:58:11.257323 waagent[1820]: 2025-05-09T23:58:11.257239Z INFO Daemon Daemon OS: flatcar 4081.3.3 May 9 23:58:11.262009 waagent[1820]: 2025-05-09T23:58:11.261942Z INFO Daemon Daemon Python: 3.11.9 May 9 23:58:11.269012 waagent[1820]: 2025-05-09T23:58:11.268761Z INFO Daemon Daemon Run daemon May 9 23:58:11.274105 waagent[1820]: 2025-05-09T23:58:11.274036Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.3' May 9 23:58:11.283708 waagent[1820]: 2025-05-09T23:58:11.283620Z INFO Daemon Daemon Using waagent for provisioning May 9 23:58:11.289257 waagent[1820]: 2025-05-09T23:58:11.289202Z INFO Daemon Daemon Activate resource disk May 9 23:58:11.294225 waagent[1820]: 2025-05-09T23:58:11.294158Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 9 23:58:11.306160 waagent[1820]: 2025-05-09T23:58:11.306085Z INFO Daemon Daemon Found device: None May 9 23:58:11.311332 waagent[1820]: 2025-05-09T23:58:11.311267Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 9 23:58:11.320539 waagent[1820]: 2025-05-09T23:58:11.320474Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 9 23:58:11.335056 waagent[1820]: 2025-05-09T23:58:11.334982Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 9 23:58:11.342269 waagent[1820]: 2025-05-09T23:58:11.342189Z INFO Daemon Daemon Running default provisioning handler May 9 23:58:11.356665 waagent[1820]: 2025-05-09T23:58:11.354763Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. May 9 23:58:11.369764 waagent[1820]: 2025-05-09T23:58:11.369690Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 9 23:58:11.380411 waagent[1820]: 2025-05-09T23:58:11.380342Z INFO Daemon Daemon cloud-init is enabled: False May 9 23:58:11.388009 waagent[1820]: 2025-05-09T23:58:11.387946Z INFO Daemon Daemon Copying ovf-env.xml May 9 23:58:11.480663 waagent[1820]: 2025-05-09T23:58:11.478675Z INFO Daemon Daemon Successfully mounted dvd May 9 23:58:11.507628 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 9 23:58:11.509477 waagent[1820]: 2025-05-09T23:58:11.509386Z INFO Daemon Daemon Detect protocol endpoint May 9 23:58:11.514629 waagent[1820]: 2025-05-09T23:58:11.514553Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 9 23:58:11.521282 waagent[1820]: 2025-05-09T23:58:11.521214Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 9 23:58:11.528454 waagent[1820]: 2025-05-09T23:58:11.528390Z INFO Daemon Daemon Test for route to 168.63.129.16 May 9 23:58:11.534475 waagent[1820]: 2025-05-09T23:58:11.534415Z INFO Daemon Daemon Route to 168.63.129.16 exists May 9 23:58:11.540128 waagent[1820]: 2025-05-09T23:58:11.540064Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 9 23:58:11.573589 waagent[1820]: 2025-05-09T23:58:11.573515Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 9 23:58:11.581131 waagent[1820]: 2025-05-09T23:58:11.581084Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 9 23:58:11.587460 waagent[1820]: 2025-05-09T23:58:11.587382Z INFO Daemon Daemon Server preferred version:2015-04-05 May 9 23:58:11.859857 waagent[1820]: 2025-05-09T23:58:11.859705Z INFO Daemon Daemon Initializing goal state during protocol detection May 9 23:58:11.867663 waagent[1820]: 2025-05-09T23:58:11.867110Z INFO Daemon Daemon Forcing an update of the goal state. May 9 23:58:11.877095 waagent[1820]: 2025-05-09T23:58:11.877036Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 9 23:58:11.900038 waagent[1820]: 2025-05-09T23:58:11.899986Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 9 23:58:11.906699 waagent[1820]: 2025-05-09T23:58:11.906614Z INFO Daemon May 9 23:58:11.909749 waagent[1820]: 2025-05-09T23:58:11.909694Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 48ba0558-ba9a-4eb6-9789-6f0ef1d4a4bf eTag: 7425516803148942196 source: Fabric] May 9 23:58:11.922089 waagent[1820]: 2025-05-09T23:58:11.922036Z INFO Daemon The vmSettings originated via Fabric; will ignore them. May 9 23:58:11.929936 waagent[1820]: 2025-05-09T23:58:11.929886Z INFO Daemon May 9 23:58:11.932848 waagent[1820]: 2025-05-09T23:58:11.932797Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 9 23:58:11.946959 waagent[1820]: 2025-05-09T23:58:11.946917Z INFO Daemon Daemon Downloading artifacts profile blob May 9 23:58:12.045784 waagent[1820]: 2025-05-09T23:58:12.045688Z INFO Daemon Downloaded certificate {'thumbprint': 'E9C3C6192E8F6B70D20A58E2DFDE0F5D69495FD3', 'hasPrivateKey': True} May 9 23:58:12.058104 waagent[1820]: 2025-05-09T23:58:12.058047Z INFO Daemon Downloaded certificate {'thumbprint': '1E5A6A7227C2D711DC461D5958B1201B5910F988', 'hasPrivateKey': False} May 9 23:58:12.069126 waagent[1820]: 2025-05-09T23:58:12.069069Z INFO Daemon Fetch goal state completed May 9 23:58:12.080998 waagent[1820]: 2025-05-09T23:58:12.080948Z INFO Daemon Daemon Starting provisioning May 9 23:58:12.086333 waagent[1820]: 2025-05-09T23:58:12.086261Z INFO Daemon Daemon Handle ovf-env.xml. May 9 23:58:12.091221 waagent[1820]: 2025-05-09T23:58:12.091158Z INFO Daemon Daemon Set hostname [ci-4081.3.3-n-84ab9604c4] May 9 23:58:12.116663 waagent[1820]: 2025-05-09T23:58:12.111786Z INFO Daemon Daemon Publish hostname [ci-4081.3.3-n-84ab9604c4] May 9 23:58:12.119497 waagent[1820]: 2025-05-09T23:58:12.119422Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 9 23:58:12.126518 waagent[1820]: 2025-05-09T23:58:12.126447Z INFO Daemon Daemon Primary interface is [eth0] May 9 23:58:12.172821 systemd-networkd[1507]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:58:12.172833 systemd-networkd[1507]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
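The goal-state exchange above follows the WireServer protocol: the agent pins wire protocol version 2012-11-30 even though the fabric advertises 2015-04-05 as preferred, then fetches the goal state, vmSettings and certificates. A minimal sketch of the same goalstate request, assuming only the documented x-ms-version header:

    import urllib.request

    WIRESERVER = "168.63.129.16"

    # x-ms-version carries the negotiated protocol version the agent logs
    # above; the WireServer rejects requests without it.
    req = urllib.request.Request(
        f"http://{WIRESERVER}/machine/?comp=goalstate",
        headers={"x-ms-version": "2012-11-30"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(resp.read().decode()[:400])  # XML GoalState document (truncated)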
May 9 23:58:12.173814 waagent[1820]: 2025-05-09T23:58:12.173473Z INFO Daemon Daemon Create user account if not exists May 9 23:58:12.172925 systemd-networkd[1507]: eth0: DHCP lease lost May 9 23:58:12.180008 waagent[1820]: 2025-05-09T23:58:12.179914Z INFO Daemon Daemon User core already exists, skip useradd May 9 23:58:12.186230 waagent[1820]: 2025-05-09T23:58:12.186157Z INFO Daemon Daemon Configure sudoer May 9 23:58:12.191773 waagent[1820]: 2025-05-09T23:58:12.191691Z INFO Daemon Daemon Configure sshd May 9 23:58:12.196627 waagent[1820]: 2025-05-09T23:58:12.196553Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 9 23:58:12.209910 systemd-networkd[1507]: eth0: DHCPv6 lease lost May 9 23:58:12.210620 waagent[1820]: 2025-05-09T23:58:12.210527Z INFO Daemon Daemon Deploy ssh public key. May 9 23:58:12.225731 systemd-networkd[1507]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 9 23:58:13.323315 waagent[1820]: 2025-05-09T23:58:13.323227Z INFO Daemon Daemon Provisioning complete May 9 23:58:13.342366 waagent[1820]: 2025-05-09T23:58:13.342291Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 9 23:58:13.351345 waagent[1820]: 2025-05-09T23:58:13.351229Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 9 23:58:13.362346 waagent[1820]: 2025-05-09T23:58:13.362274Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent May 9 23:58:13.509430 waagent[1900]: 2025-05-09T23:58:13.509335Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) May 9 23:58:13.509802 waagent[1900]: 2025-05-09T23:58:13.509513Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.3 May 9 23:58:13.509802 waagent[1900]: 2025-05-09T23:58:13.509566Z INFO ExtHandler ExtHandler Python: 3.11.9 May 9 23:58:13.932682 waagent[1900]: 2025-05-09T23:58:13.932562Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 9 23:58:13.932884 waagent[1900]: 2025-05-09T23:58:13.932843Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 9 23:58:13.932950 waagent[1900]: 2025-05-09T23:58:13.932919Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 9 23:58:13.941468 waagent[1900]: 2025-05-09T23:58:13.941371Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 9 23:58:13.947521 waagent[1900]: 2025-05-09T23:58:13.947466Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 9 23:58:13.948122 waagent[1900]: 2025-05-09T23:58:13.948071Z INFO ExtHandler May 9 23:58:13.948201 waagent[1900]: 2025-05-09T23:58:13.948169Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 18747de1-0886-42ca-9ad0-f172a1b4676b eTag: 7425516803148942196 source: Fabric] May 9 23:58:13.948522 waagent[1900]: 2025-05-09T23:58:13.948478Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
May 9 23:58:13.955787 waagent[1900]: 2025-05-09T23:58:13.955699Z INFO ExtHandler May 9 23:58:13.955881 waagent[1900]: 2025-05-09T23:58:13.955847Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 9 23:58:13.960179 waagent[1900]: 2025-05-09T23:58:13.960135Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 9 23:58:14.184785 waagent[1900]: 2025-05-09T23:58:14.184600Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E9C3C6192E8F6B70D20A58E2DFDE0F5D69495FD3', 'hasPrivateKey': True} May 9 23:58:14.185249 waagent[1900]: 2025-05-09T23:58:14.185196Z INFO ExtHandler Downloaded certificate {'thumbprint': '1E5A6A7227C2D711DC461D5958B1201B5910F988', 'hasPrivateKey': False} May 9 23:58:14.185696 waagent[1900]: 2025-05-09T23:58:14.185630Z INFO ExtHandler Fetch goal state completed May 9 23:58:14.201209 waagent[1900]: 2025-05-09T23:58:14.201143Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1900 May 9 23:58:14.201384 waagent[1900]: 2025-05-09T23:58:14.201349Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 9 23:58:14.203094 waagent[1900]: 2025-05-09T23:58:14.203047Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.3', '', 'Flatcar Container Linux by Kinvolk'] May 9 23:58:14.203490 waagent[1900]: 2025-05-09T23:58:14.203450Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 9 23:58:14.341557 waagent[1900]: 2025-05-09T23:58:14.341508Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 9 23:58:14.341806 waagent[1900]: 2025-05-09T23:58:14.341763Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 9 23:58:14.348624 waagent[1900]: 2025-05-09T23:58:14.348093Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 9 23:58:14.355084 systemd[1]: Reloading requested from client PID 1915 ('systemctl') (unit waagent.service)... May 9 23:58:14.355332 systemd[1]: Reloading... May 9 23:58:14.445661 zram_generator::config[1952]: No configuration found. May 9 23:58:14.556778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:58:14.643264 systemd[1]: Reloading finished in 287 ms. May 9 23:58:14.669583 waagent[1900]: 2025-05-09T23:58:14.669181Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service May 9 23:58:14.677349 systemd[1]: Reloading requested from client PID 2006 ('systemctl') (unit waagent.service)... May 9 23:58:14.677370 systemd[1]: Reloading... May 9 23:58:14.762683 zram_generator::config[2040]: No configuration found. May 9 23:58:14.872949 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:58:14.948243 systemd[1]: Reloading finished in 270 ms. 
May 9 23:58:14.971693 waagent[1900]: 2025-05-09T23:58:14.970835Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 9 23:58:14.971693 waagent[1900]: 2025-05-09T23:58:14.971014Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 9 23:58:15.346789 waagent[1900]: 2025-05-09T23:58:15.345446Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 9 23:58:15.346789 waagent[1900]: 2025-05-09T23:58:15.346127Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] May 9 23:58:15.347038 waagent[1900]: 2025-05-09T23:58:15.346980Z INFO ExtHandler ExtHandler Starting env monitor service. May 9 23:58:15.347166 waagent[1900]: 2025-05-09T23:58:15.347118Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 9 23:58:15.347650 waagent[1900]: 2025-05-09T23:58:15.347580Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 9 23:58:15.347702 waagent[1900]: 2025-05-09T23:58:15.347669Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 9 23:58:15.348027 waagent[1900]: 2025-05-09T23:58:15.347970Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 9 23:58:15.348405 waagent[1900]: 2025-05-09T23:58:15.348345Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 9 23:58:15.348604 waagent[1900]: 2025-05-09T23:58:15.348540Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 9 23:58:15.349065 waagent[1900]: 2025-05-09T23:58:15.349007Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 9 23:58:15.349065 waagent[1900]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 9 23:58:15.349065 waagent[1900]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 9 23:58:15.349065 waagent[1900]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 9 23:58:15.349065 waagent[1900]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 9 23:58:15.349065 waagent[1900]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 9 23:58:15.349065 waagent[1900]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 9 23:58:15.349499 waagent[1900]: 2025-05-09T23:58:15.349441Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 9 23:58:15.349782 waagent[1900]: 2025-05-09T23:58:15.349654Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 9 23:58:15.349782 waagent[1900]: 2025-05-09T23:58:15.349565Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 9 23:58:15.349782 waagent[1900]: 2025-05-09T23:58:15.349712Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
May 9 23:58:15.349881 waagent[1900]: 2025-05-09T23:58:15.349827Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 9 23:58:15.350052 waagent[1900]: 2025-05-09T23:58:15.349973Z INFO EnvHandler ExtHandler Configure routes May 9 23:58:15.350333 waagent[1900]: 2025-05-09T23:58:15.350285Z INFO EnvHandler ExtHandler Gateway:None May 9 23:58:15.350379 waagent[1900]: 2025-05-09T23:58:15.350360Z INFO EnvHandler ExtHandler Routes:None May 9 23:58:15.377911 waagent[1900]: 2025-05-09T23:58:15.377717Z INFO ExtHandler ExtHandler May 9 23:58:15.379665 waagent[1900]: 2025-05-09T23:58:15.378122Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1d8f74ae-cb3f-4fd3-ab18-adaa2a052b88 correlation 73afcb41-de8e-44c6-94b8-bc411957b448 created: 2025-05-09T23:56:54.206146Z] May 9 23:58:15.379665 waagent[1900]: 2025-05-09T23:58:15.378534Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 9 23:58:15.379665 waagent[1900]: 2025-05-09T23:58:15.379142Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] May 9 23:58:15.400862 waagent[1900]: 2025-05-09T23:58:15.400776Z INFO MonitorHandler ExtHandler Network interfaces: May 9 23:58:15.400862 waagent[1900]: Executing ['ip', '-a', '-o', 'link']: May 9 23:58:15.400862 waagent[1900]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 9 23:58:15.400862 waagent[1900]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:88:cb brd ff:ff:ff:ff:ff:ff May 9 23:58:15.400862 waagent[1900]: 3: enP61942s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:88:cb brd ff:ff:ff:ff:ff:ff\ altname enP61942p0s2 May 9 23:58:15.400862 waagent[1900]: Executing ['ip', '-4', '-a', '-o', 'address']: May 9 23:58:15.400862 waagent[1900]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 9 23:58:15.400862 waagent[1900]: 2: eth0 inet 10.200.20.38/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 9 23:58:15.400862 waagent[1900]: Executing ['ip', '-6', '-a', '-o', 'address']: May 9 23:58:15.400862 waagent[1900]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 9 23:58:15.400862 waagent[1900]: 2: eth0 inet6 fe80::222:48ff:feb9:88cb/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 9 23:58:15.400862 waagent[1900]: 3: enP61942s1 inet6 fe80::222:48ff:feb9:88cb/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 9 23:58:15.422315 waagent[1900]: 2025-05-09T23:58:15.422249Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 967A16F3-6E97-4CE7-81DA-949A1D8AC6F5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] May 9 23:58:15.465584 waagent[1900]: 2025-05-09T23:58:15.464627Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules.
Current Firewall rules: May 9 23:58:15.465584 waagent[1900]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 9 23:58:15.465584 waagent[1900]: pkts bytes target prot opt in out source destination May 9 23:58:15.465584 waagent[1900]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 9 23:58:15.465584 waagent[1900]: pkts bytes target prot opt in out source destination May 9 23:58:15.465584 waagent[1900]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 9 23:58:15.465584 waagent[1900]: pkts bytes target prot opt in out source destination May 9 23:58:15.465584 waagent[1900]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 9 23:58:15.465584 waagent[1900]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 9 23:58:15.465584 waagent[1900]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 9 23:58:15.467958 waagent[1900]: 2025-05-09T23:58:15.467901Z INFO EnvHandler ExtHandler Current Firewall rules: May 9 23:58:15.467958 waagent[1900]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 9 23:58:15.467958 waagent[1900]: pkts bytes target prot opt in out source destination May 9 23:58:15.467958 waagent[1900]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 9 23:58:15.467958 waagent[1900]: pkts bytes target prot opt in out source destination May 9 23:58:15.467958 waagent[1900]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 9 23:58:15.467958 waagent[1900]: pkts bytes target prot opt in out source destination May 9 23:58:15.467958 waagent[1900]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 9 23:58:15.467958 waagent[1900]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 9 23:58:15.467958 waagent[1900]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 9 23:58:15.468503 waagent[1900]: 2025-05-09T23:58:15.468469Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 9 23:58:20.214863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 23:58:20.223918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:20.333270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:20.337969 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:58:20.440577 kubelet[2133]: E0509 23:58:20.440521 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:58:20.443944 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:58:20.444216 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:58:30.464995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 9 23:58:30.470859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:30.583671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
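The three OUTPUT rules waagent prints above implement the usual Azure guest policy: allow TCP/53 to the WireServer, allow root-owned (UID 0) traffic to it, and drop any other new connection addressed to 168.63.129.16. waagent installs and manages these itself; as a sketch only, the equivalent iptables invocations would be (the -w wait flag is an optional safety choice here):

    import subprocess

    WIRESERVER = "168.63.129.16"

    # Mirrors of the rules in the table above, for illustration only.
    RULES = [
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "--dport", "53", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

    for rule in RULES:
        subprocess.run(["iptables", "-w"] + rule, check=True)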
May 9 23:58:30.588447 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:58:30.699141 kubelet[2148]: E0509 23:58:30.699093 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:58:30.701946 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:58:30.702228 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:58:32.126244 chronyd[1674]: Selected source PHC0 May 9 23:58:40.714982 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 9 23:58:40.726830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:40.832450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:40.840921 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:58:40.897500 kubelet[2163]: E0509 23:58:40.897451 2163 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:58:40.900229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:58:40.900368 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:58:41.197046 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 23:58:41.198593 systemd[1]: Started sshd@0-10.200.20.38:22-10.200.16.10:45090.service - OpenSSH per-connection server daemon (10.200.16.10:45090). May 9 23:58:41.710273 sshd[2171]: Accepted publickey for core from 10.200.16.10 port 45090 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 9 23:58:41.711723 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:41.715857 systemd-logind[1686]: New session 3 of user core. May 9 23:58:41.723815 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 23:58:42.118550 systemd[1]: Started sshd@1-10.200.20.38:22-10.200.16.10:45094.service - OpenSSH per-connection server daemon (10.200.16.10:45094). May 9 23:58:42.562960 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 45094 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 9 23:58:42.564419 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:42.568913 systemd-logind[1686]: New session 4 of user core. May 9 23:58:42.573820 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 23:58:42.887034 sshd[2176]: pam_unix(sshd:session): session closed for user core May 9 23:58:42.891557 systemd-logind[1686]: Session 4 logged out. Waiting for processes to exit. May 9 23:58:42.892348 systemd[1]: sshd@1-10.200.20.38:22-10.200.16.10:45094.service: Deactivated successfully. May 9 23:58:42.894413 systemd[1]: session-4.scope: Deactivated successfully. May 9 23:58:42.896587 systemd-logind[1686]: Removed session 4. 
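The "Selected source PHC0" line from chronyd above means the clock is now steered from a PTP hardware clock exposed by the hypervisor rather than a network NTP source. As a small sketch, the device behind PHC0 can be identified from the standard PTP sysfs attribute; the "hyperv" value is what the Hyper-V PTP driver is expected to register and should be treated as an assumption for this host:

    import pathlib

    # The PTP class device exposes the clock's driver-assigned name.
    name = pathlib.Path("/sys/class/ptp/ptp0/clock_name").read_text().strip()
    print(name)  # expected "hyperv" on Azure/Hyper-V guests (assumption)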
May 9 23:58:42.971965 systemd[1]: Started sshd@2-10.200.20.38:22-10.200.16.10:45108.service - OpenSSH per-connection server daemon (10.200.16.10:45108). May 9 23:58:43.393514 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 45108 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 9 23:58:43.394969 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:43.399873 systemd-logind[1686]: New session 5 of user core. May 9 23:58:43.408853 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 23:58:43.691341 sshd[2183]: pam_unix(sshd:session): session closed for user core May 9 23:58:43.695471 systemd[1]: sshd@2-10.200.20.38:22-10.200.16.10:45108.service: Deactivated successfully. May 9 23:58:43.697343 systemd[1]: session-5.scope: Deactivated successfully. May 9 23:58:43.699290 systemd-logind[1686]: Session 5 logged out. Waiting for processes to exit. May 9 23:58:43.700403 systemd-logind[1686]: Removed session 5. May 9 23:58:43.784935 systemd[1]: Started sshd@3-10.200.20.38:22-10.200.16.10:45116.service - OpenSSH per-connection server daemon (10.200.16.10:45116). May 9 23:58:44.225169 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 45116 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 9 23:58:44.226552 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:44.230469 systemd-logind[1686]: New session 6 of user core. May 9 23:58:44.239811 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 23:58:44.547327 sshd[2190]: pam_unix(sshd:session): session closed for user core May 9 23:58:44.551971 systemd-logind[1686]: Session 6 logged out. Waiting for processes to exit. May 9 23:58:44.552615 systemd[1]: sshd@3-10.200.20.38:22-10.200.16.10:45116.service: Deactivated successfully. May 9 23:58:44.554900 systemd[1]: session-6.scope: Deactivated successfully. May 9 23:58:44.557333 systemd-logind[1686]: Removed session 6. May 9 23:58:44.631482 systemd[1]: Started sshd@4-10.200.20.38:22-10.200.16.10:45120.service - OpenSSH per-connection server daemon (10.200.16.10:45120). May 9 23:58:45.079104 sshd[2197]: Accepted publickey for core from 10.200.16.10 port 45120 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 9 23:58:45.080467 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:45.084566 systemd-logind[1686]: New session 7 of user core. May 9 23:58:45.092811 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 23:58:45.456674 sudo[2200]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 23:58:45.456990 sudo[2200]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:58:45.487681 sudo[2200]: pam_unix(sudo:session): session closed for user root May 9 23:58:45.563918 sshd[2197]: pam_unix(sshd:session): session closed for user core May 9 23:58:45.568212 systemd[1]: sshd@4-10.200.20.38:22-10.200.16.10:45120.service: Deactivated successfully. May 9 23:58:45.570051 systemd[1]: session-7.scope: Deactivated successfully. May 9 23:58:45.570863 systemd-logind[1686]: Session 7 logged out. Waiting for processes to exit. May 9 23:58:45.572314 systemd-logind[1686]: Removed session 7. May 9 23:58:45.659851 systemd[1]: Started sshd@5-10.200.20.38:22-10.200.16.10:45124.service - OpenSSH per-connection server daemon (10.200.16.10:45124). 
May 9 23:58:46.113327 sshd[2205]: Accepted publickey for core from 10.200.16.10 port 45124 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 9 23:58:46.115824 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:46.119687 systemd-logind[1686]: New session 8 of user core. May 9 23:58:46.125866 systemd[1]: Started session-8.scope - Session 8 of User core. May 9 23:58:46.370016 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 23:58:46.370319 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:58:46.373863 sudo[2209]: pam_unix(sudo:session): session closed for user root May 9 23:58:46.379138 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 9 23:58:46.379414 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:58:46.397890 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 9 23:58:46.399931 auditctl[2212]: No rules May 9 23:58:46.400258 systemd[1]: audit-rules.service: Deactivated successfully. May 9 23:58:46.400441 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 9 23:58:46.403538 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 23:58:46.428263 augenrules[2230]: No rules May 9 23:58:46.429847 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 23:58:46.431327 sudo[2208]: pam_unix(sudo:session): session closed for user root May 9 23:58:46.507076 sshd[2205]: pam_unix(sshd:session): session closed for user core May 9 23:58:46.510843 systemd[1]: sshd@5-10.200.20.38:22-10.200.16.10:45124.service: Deactivated successfully. May 9 23:58:46.512394 systemd[1]: session-8.scope: Deactivated successfully. May 9 23:58:46.513069 systemd-logind[1686]: Session 8 logged out. Waiting for processes to exit. May 9 23:58:46.513920 systemd-logind[1686]: Removed session 8. May 9 23:58:46.590328 systemd[1]: Started sshd@6-10.200.20.38:22-10.200.16.10:45138.service - OpenSSH per-connection server daemon (10.200.16.10:45138). May 9 23:58:47.032930 sshd[2238]: Accepted publickey for core from 10.200.16.10 port 45138 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 9 23:58:47.034301 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:47.038108 systemd-logind[1686]: New session 9 of user core. May 9 23:58:47.045849 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 23:58:47.285383 sudo[2241]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 23:58:47.285701 sudo[2241]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:58:47.511862 kernel: hv_balloon: Max. dynamic memory size: 4096 MB May 9 23:58:48.135919 systemd[1]: Starting docker.service - Docker Application Container Engine... May 9 23:58:48.136083 (dockerd)[2256]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 23:58:48.666655 dockerd[2256]: time="2025-05-09T23:58:48.664928501Z" level=info msg="Starting up" May 9 23:58:49.037327 dockerd[2256]: time="2025-05-09T23:58:49.037275669Z" level=info msg="Loading containers: start." 
May 9 23:58:49.206692 kernel: Initializing XFRM netlink socket May 9 23:58:49.349092 systemd-networkd[1507]: docker0: Link UP May 9 23:58:49.382066 dockerd[2256]: time="2025-05-09T23:58:49.382014657Z" level=info msg="Loading containers: done." May 9 23:58:49.403895 dockerd[2256]: time="2025-05-09T23:58:49.403841921Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 23:58:49.404186 dockerd[2256]: time="2025-05-09T23:58:49.404166561Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 9 23:58:49.404398 dockerd[2256]: time="2025-05-09T23:58:49.404378201Z" level=info msg="Daemon has completed initialization" May 9 23:58:49.477909 dockerd[2256]: time="2025-05-09T23:58:49.477838107Z" level=info msg="API listen on /run/docker.sock" May 9 23:58:49.478590 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 23:58:50.560570 containerd[1722]: time="2025-05-09T23:58:50.560510796Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 9 23:58:50.964976 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 9 23:58:50.974070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:51.121883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:51.126533 (kubelet)[2400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:58:51.162936 kubelet[2400]: E0509 23:58:51.162863 2400 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:58:51.165242 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:58:51.165403 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:58:52.088862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756438537.mount: Deactivated successfully. 
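Two details of the Docker startup above are worth isolating: the daemon settles on the overlay2 storage driver, and it warns that native diffing is off because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR, so image builds take the slower fallback diff path. A hedged sketch for confirming both; the /proc/config.gz path is an assumption, since not every kernel exposes its config there:

    docker info --format '{{.Driver}}'              # expect: overlay2
    zgrep OVERLAY_FS_REDIRECT_DIR /proc/config.gz   # the kernel option named in the warning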
May 9 23:58:53.371674 containerd[1722]: time="2025-05-09T23:58:53.371487181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:53.375621 containerd[1722]: time="2025-05-09T23:58:53.375412978Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233118" May 9 23:58:53.380415 containerd[1722]: time="2025-05-09T23:58:53.380368375Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:53.388038 containerd[1722]: time="2025-05-09T23:58:53.387979729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:53.389300 containerd[1722]: time="2025-05-09T23:58:53.389020808Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.828469812s" May 9 23:58:53.389300 containerd[1722]: time="2025-05-09T23:58:53.389057528Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 9 23:58:53.389807 containerd[1722]: time="2025-05-09T23:58:53.389778208Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 9 23:58:54.090303 update_engine[1695]: I20250509 23:58:54.089143 1695 update_attempter.cc:509] Updating boot flags... 
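The update_attempter line is Flatcar's A/B update engine touching boot flags, typically the priority/successful attributes that keep the bootloader on the known-good partition set. A hedged sketch, assuming the update_engine_client utility and its -status flag are available as on stock Flatcar:

    update_engine_client -status   # current updater state (idle, checking, downloading, ...)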
May 9 23:58:54.152396 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2472) May 9 23:58:54.578478 containerd[1722]: time="2025-05-09T23:58:54.578429923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:54.583484 containerd[1722]: time="2025-05-09T23:58:54.583444442Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529571" May 9 23:58:54.588921 containerd[1722]: time="2025-05-09T23:58:54.588889841Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:54.594996 containerd[1722]: time="2025-05-09T23:58:54.594950880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:54.596060 containerd[1722]: time="2025-05-09T23:58:54.596025840Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.206131672s" May 9 23:58:54.596193 containerd[1722]: time="2025-05-09T23:58:54.596176880Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 9 23:58:54.596833 containerd[1722]: time="2025-05-09T23:58:54.596799080Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 9 23:58:55.700740 containerd[1722]: time="2025-05-09T23:58:55.700692933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:55.704167 containerd[1722]: time="2025-05-09T23:58:55.704125013Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482173" May 9 23:58:55.711195 containerd[1722]: time="2025-05-09T23:58:55.710753172Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:55.718272 containerd[1722]: time="2025-05-09T23:58:55.718203770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:55.719457 containerd[1722]: time="2025-05-09T23:58:55.719325610Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.12239621s" May 9 23:58:55.719457 containerd[1722]: time="2025-05-09T23:58:55.719363930Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference 
\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 9 23:58:55.720509 containerd[1722]: time="2025-05-09T23:58:55.720472570Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 9 23:58:57.407822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394586328.mount: Deactivated successfully. May 9 23:58:57.815774 containerd[1722]: time="2025-05-09T23:58:57.815719831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:57.818659 containerd[1722]: time="2025-05-09T23:58:57.818611391Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370351" May 9 23:58:57.822590 containerd[1722]: time="2025-05-09T23:58:57.822536110Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:57.827200 containerd[1722]: time="2025-05-09T23:58:57.827119389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:57.828266 containerd[1722]: time="2025-05-09T23:58:57.827771389Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 2.107262059s" May 9 23:58:57.828266 containerd[1722]: time="2025-05-09T23:58:57.827812029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 9 23:58:57.828371 containerd[1722]: time="2025-05-09T23:58:57.828301469Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 9 23:58:58.596601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3774959720.mount: Deactivated successfully. 
May 9 23:59:00.348623 containerd[1722]: time="2025-05-09T23:59:00.348397064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:00.352031 containerd[1722]: time="2025-05-09T23:59:00.351808423Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" May 9 23:59:00.355233 containerd[1722]: time="2025-05-09T23:59:00.355189583Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:00.362264 containerd[1722]: time="2025-05-09T23:59:00.362199661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:00.363437 containerd[1722]: time="2025-05-09T23:59:00.363313261Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.534981752s" May 9 23:59:00.363437 containerd[1722]: time="2025-05-09T23:59:00.363346661Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 9 23:59:00.364197 containerd[1722]: time="2025-05-09T23:59:00.363927981Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 9 23:59:00.996231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011545662.mount: Deactivated successfully. May 9 23:59:01.214844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 9 23:59:01.222836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:59:01.932498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:59:01.942924 (kubelet)[2580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:59:02.010272 kubelet[2580]: E0509 23:59:02.010226 2580 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:59:02.013073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:59:02.013478 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
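Restart counters 4 and 5 end the same way: run.go exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet, and systemd schedules the next attempt. On a kubeadm-managed node that file only appears once kubeadm init or kubeadm join runs, so this loop is expected during provisioning rather than a fault. A hedged sketch for watching it from a shell, commands assumed available:

    ls -l /var/lib/kubelet/config.yaml       # absent until kubeadm init/join writes it
    journalctl -u kubelet -n 20 --no-pager   # the same open() error quoted above
    systemctl show kubelet -p NRestarts      # the restart counter the log reports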
May 9 23:59:02.425722 containerd[1722]: time="2025-05-09T23:59:02.425673952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:02.430507 containerd[1722]: time="2025-05-09T23:59:02.430336711Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 9 23:59:02.436627 containerd[1722]: time="2025-05-09T23:59:02.436596349Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:02.446447 containerd[1722]: time="2025-05-09T23:59:02.446384907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:02.447257 containerd[1722]: time="2025-05-09T23:59:02.447119587Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 2.083162486s" May 9 23:59:02.447257 containerd[1722]: time="2025-05-09T23:59:02.447156427Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 9 23:59:02.447924 containerd[1722]: time="2025-05-09T23:59:02.447748227Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 9 23:59:03.153451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1554780565.mount: Deactivated successfully. May 9 23:59:04.995782 containerd[1722]: time="2025-05-09T23:59:04.995728132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:05.000032 containerd[1722]: time="2025-05-09T23:59:04.999994211Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" May 9 23:59:05.004263 containerd[1722]: time="2025-05-09T23:59:05.004213889Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:05.011124 containerd[1722]: time="2025-05-09T23:59:05.011045567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:05.012509 containerd[1722]: time="2025-05-09T23:59:05.012366406Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.564587219s" May 9 23:59:05.012509 containerd[1722]: time="2025-05-09T23:59:05.012404726Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 9 23:59:10.903924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
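Every pull above follows the same CRI pattern: ImageCreate events for the tag, the sha256 image ID, and the repo digest, then a Pulled summary with size and wall time (etcd:3.5.16-0 being the largest at roughly 68 MB in 2.56s). A hedged sketch for listing the result over the same CRI socket; the endpoint path is an assumption for a containerd host:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images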
May 9 23:59:10.916936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:59:10.950360 systemd[1]: Reloading requested from client PID 2668 ('systemctl') (unit session-9.scope)... May 9 23:59:10.950385 systemd[1]: Reloading... May 9 23:59:11.060018 zram_generator::config[2709]: No configuration found. May 9 23:59:11.180560 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:59:11.259139 systemd[1]: Reloading finished in 308 ms. May 9 23:59:11.304677 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 23:59:11.304758 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 23:59:11.305072 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:59:11.312039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:59:11.419664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:59:11.434004 (kubelet)[2776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 23:59:11.472890 kubelet[2776]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:59:11.472890 kubelet[2776]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 9 23:59:11.472890 kubelet[2776]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
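The three startup warnings all point the same way: --container-runtime-endpoint and --volume-plugin-dir are deprecated in favor of the kubelet config file, and --pod-infra-container-image is slated for removal in 1.35. A hedged sketch of the file-based equivalent for the runtime endpoint only; kubelet.config.k8s.io/v1beta1 is the schema current kubelets accept, the value is illustrative, and on kubeadm nodes this file is normally generated rather than hand-written:

    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml >/dev/null
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF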
May 9 23:59:11.473285 kubelet[2776]: I0509 23:59:11.472941 2776 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 23:59:12.088468 kubelet[2776]: I0509 23:59:12.088412 2776 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 9 23:59:12.089676 kubelet[2776]: I0509 23:59:12.088756 2776 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 23:59:12.089676 kubelet[2776]: I0509 23:59:12.089201 2776 server.go:954] "Client rotation is on, will bootstrap in background" May 9 23:59:12.113790 kubelet[2776]: E0509 23:59:12.113690 2776 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 9 23:59:12.118191 kubelet[2776]: I0509 23:59:12.117983 2776 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 23:59:12.129921 kubelet[2776]: E0509 23:59:12.129614 2776 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 23:59:12.129921 kubelet[2776]: I0509 23:59:12.129907 2776 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 23:59:12.140663 kubelet[2776]: I0509 23:59:12.140555 2776 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 23:59:12.142044 kubelet[2776]: I0509 23:59:12.141582 2776 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 23:59:12.142044 kubelet[2776]: I0509 23:59:12.141624 2776 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-84ab9604c4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 23:59:12.142044 kubelet[2776]: I0509 23:59:12.141820 2776 topology_manager.go:138] "Creating topology manager with none policy" May 9 23:59:12.142044 kubelet[2776]: I0509 23:59:12.141829 2776 container_manager_linux.go:304] "Creating device plugin manager" May 9 23:59:12.142270 kubelet[2776]: I0509 23:59:12.141977 2776 state_mem.go:36] "Initialized new in-memory state store" May 9 23:59:12.145102 kubelet[2776]: I0509 23:59:12.145075 2776 kubelet.go:446] "Attempting to sync node with API server" May 9 23:59:12.145146 kubelet[2776]: I0509 23:59:12.145107 2776 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 23:59:12.145240 kubelet[2776]: I0509 23:59:12.145222 2776 kubelet.go:352] "Adding apiserver pod source" May 9 23:59:12.145266 kubelet[2776]: I0509 23:59:12.145243 2776 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 23:59:12.146619 kubelet[2776]: W0509 23:59:12.146571 2776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-84ab9604c4&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 9 23:59:12.146786 kubelet[2776]: E0509 23:59:12.146756 2776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-84ab9604c4&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 9 23:59:12.149987 kubelet[2776]: 
W0509 23:59:12.149830 2776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 9 23:59:12.149987 kubelet[2776]: E0509 23:59:12.149884 2776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 9 23:59:12.150100 kubelet[2776]: I0509 23:59:12.150051 2776 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 23:59:12.151071 kubelet[2776]: I0509 23:59:12.150498 2776 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 23:59:12.151071 kubelet[2776]: W0509 23:59:12.150568 2776 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 23:59:12.152316 kubelet[2776]: I0509 23:59:12.152139 2776 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 9 23:59:12.152316 kubelet[2776]: I0509 23:59:12.152173 2776 server.go:1287] "Started kubelet" May 9 23:59:12.155678 kubelet[2776]: E0509 23:59:12.155160 2776 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-n-84ab9604c4.183e0150d531d264 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-84ab9604c4,UID:ci-4081.3.3-n-84ab9604c4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-84ab9604c4,},FirstTimestamp:2025-05-09 23:59:12.152154724 +0000 UTC m=+0.715013706,LastTimestamp:2025-05-09 23:59:12.152154724 +0000 UTC m=+0.715013706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-84ab9604c4,}" May 9 23:59:12.155678 kubelet[2776]: I0509 23:59:12.155358 2776 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 9 23:59:12.156259 kubelet[2776]: I0509 23:59:12.156227 2776 server.go:490] "Adding debug handlers to kubelet server" May 9 23:59:12.156813 kubelet[2776]: I0509 23:59:12.156759 2776 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 23:59:12.157156 kubelet[2776]: I0509 23:59:12.157140 2776 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 23:59:12.159432 kubelet[2776]: I0509 23:59:12.159340 2776 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 23:59:12.161295 kubelet[2776]: I0509 23:59:12.159874 2776 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 23:59:12.162275 kubelet[2776]: E0509 23:59:12.162253 2776 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 23:59:12.162754 kubelet[2776]: E0509 23:59:12.162724 2776 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-84ab9604c4\" not found" May 9 23:59:12.162892 kubelet[2776]: I0509 23:59:12.162881 2776 volume_manager.go:297] "Starting Kubelet Volume Manager" May 9 23:59:12.163182 kubelet[2776]: I0509 23:59:12.163165 2776 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 23:59:12.163302 kubelet[2776]: I0509 23:59:12.163293 2776 reconciler.go:26] "Reconciler: start to sync state" May 9 23:59:12.164071 kubelet[2776]: W0509 23:59:12.164033 2776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 9 23:59:12.164195 kubelet[2776]: E0509 23:59:12.164177 2776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 9 23:59:12.164434 kubelet[2776]: I0509 23:59:12.164419 2776 factory.go:221] Registration of the systemd container factory successfully May 9 23:59:12.164591 kubelet[2776]: I0509 23:59:12.164573 2776 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 23:59:12.166247 kubelet[2776]: I0509 23:59:12.166229 2776 factory.go:221] Registration of the containerd container factory successfully May 9 23:59:12.171696 kubelet[2776]: E0509 23:59:12.171657 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-84ab9604c4?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="200ms" May 9 23:59:12.194914 kubelet[2776]: I0509 23:59:12.194865 2776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 23:59:12.196488 kubelet[2776]: I0509 23:59:12.196279 2776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 23:59:12.196488 kubelet[2776]: I0509 23:59:12.196436 2776 status_manager.go:227] "Starting to sync pod status with apiserver" May 9 23:59:12.197191 kubelet[2776]: I0509 23:59:12.196699 2776 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 9 23:59:12.197191 kubelet[2776]: I0509 23:59:12.196835 2776 kubelet.go:2388] "Starting kubelet main sync loop" May 9 23:59:12.197191 kubelet[2776]: E0509 23:59:12.196883 2776 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 23:59:12.200743 kubelet[2776]: W0509 23:59:12.200577 2776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 9 23:59:12.200743 kubelet[2776]: E0509 23:59:12.200665 2776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 9 23:59:12.226086 kubelet[2776]: I0509 23:59:12.226049 2776 cpu_manager.go:221] "Starting CPU manager" policy="none" May 9 23:59:12.226086 kubelet[2776]: I0509 23:59:12.226073 2776 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 9 23:59:12.226086 kubelet[2776]: I0509 23:59:12.226093 2776 state_mem.go:36] "Initialized new in-memory state store" May 9 23:59:12.231059 kubelet[2776]: I0509 23:59:12.231029 2776 policy_none.go:49] "None policy: Start" May 9 23:59:12.231059 kubelet[2776]: I0509 23:59:12.231059 2776 memory_manager.go:186] "Starting memorymanager" policy="None" May 9 23:59:12.231059 kubelet[2776]: I0509 23:59:12.231070 2776 state_mem.go:35] "Initializing new in-memory state store" May 9 23:59:12.239796 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 23:59:12.254786 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 23:59:12.258756 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 23:59:12.263739 kubelet[2776]: E0509 23:59:12.263689 2776 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-84ab9604c4\" not found" May 9 23:59:12.270843 kubelet[2776]: I0509 23:59:12.270816 2776 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 23:59:12.271689 kubelet[2776]: I0509 23:59:12.271182 2776 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 23:59:12.271689 kubelet[2776]: I0509 23:59:12.271210 2776 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 23:59:12.271689 kubelet[2776]: I0509 23:59:12.271483 2776 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 23:59:12.273179 kubelet[2776]: E0509 23:59:12.272789 2776 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 9 23:59:12.273957 kubelet[2776]: E0509 23:59:12.273282 2776 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-n-84ab9604c4\" not found" May 9 23:59:12.308582 systemd[1]: Created slice kubepods-burstable-pod4d7b5afbc74d28a9f19a9797239e61b9.slice - libcontainer container kubepods-burstable-pod4d7b5afbc74d28a9f19a9797239e61b9.slice. 
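The Created slice lines are the container-manager config from above taking effect: CgroupsPerQOS=true with the systemd cgroup driver produces kubepods.slice plus its burstable and besteffort QoS children, and one pod-scoped slice per pod follows (the first appears just above; two more come next for the other static pods under the /etc/kubernetes/manifests path the kubelet registered earlier). A hedged sketch for walking that tree, commands assumed available:

    systemd-cgls kubepods.slice      # kubepods -> QoS slices -> per-pod slices
    ls /etc/kubernetes/manifests     # the static pod manifests being synced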
May 9 23:59:12.324333 kubelet[2776]: E0509 23:59:12.324292 2776 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-84ab9604c4\" not found" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.327737 systemd[1]: Created slice kubepods-burstable-podf11e0c765efa6c532095e0ed1b18a42b.slice - libcontainer container kubepods-burstable-podf11e0c765efa6c532095e0ed1b18a42b.slice. May 9 23:59:12.337366 kubelet[2776]: E0509 23:59:12.337331 2776 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-84ab9604c4\" not found" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.340274 systemd[1]: Created slice kubepods-burstable-podb5362ccd8d45bdb6d8a387f9e16e7591.slice - libcontainer container kubepods-burstable-podb5362ccd8d45bdb6d8a387f9e16e7591.slice. May 9 23:59:12.343123 kubelet[2776]: E0509 23:59:12.343088 2776 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-84ab9604c4\" not found" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.364613 kubelet[2776]: I0509 23:59:12.364576 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f11e0c765efa6c532095e0ed1b18a42b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-84ab9604c4\" (UID: \"f11e0c765efa6c532095e0ed1b18a42b\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.364613 kubelet[2776]: I0509 23:59:12.364615 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f11e0c765efa6c532095e0ed1b18a42b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-84ab9604c4\" (UID: \"f11e0c765efa6c532095e0ed1b18a42b\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.364613 kubelet[2776]: I0509 23:59:12.364649 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f11e0c765efa6c532095e0ed1b18a42b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-84ab9604c4\" (UID: \"f11e0c765efa6c532095e0ed1b18a42b\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.364613 kubelet[2776]: I0509 23:59:12.364667 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f11e0c765efa6c532095e0ed1b18a42b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-84ab9604c4\" (UID: \"f11e0c765efa6c532095e0ed1b18a42b\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.364921 kubelet[2776]: I0509 23:59:12.364687 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5362ccd8d45bdb6d8a387f9e16e7591-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-84ab9604c4\" (UID: \"b5362ccd8d45bdb6d8a387f9e16e7591\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.364921 kubelet[2776]: I0509 23:59:12.364703 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d7b5afbc74d28a9f19a9797239e61b9-ca-certs\") pod 
\"kube-apiserver-ci-4081.3.3-n-84ab9604c4\" (UID: \"4d7b5afbc74d28a9f19a9797239e61b9\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.364921 kubelet[2776]: I0509 23:59:12.364716 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d7b5afbc74d28a9f19a9797239e61b9-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-84ab9604c4\" (UID: \"4d7b5afbc74d28a9f19a9797239e61b9\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.364921 kubelet[2776]: I0509 23:59:12.364735 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d7b5afbc74d28a9f19a9797239e61b9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-84ab9604c4\" (UID: \"4d7b5afbc74d28a9f19a9797239e61b9\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.364921 kubelet[2776]: I0509 23:59:12.364760 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f11e0c765efa6c532095e0ed1b18a42b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-84ab9604c4\" (UID: \"f11e0c765efa6c532095e0ed1b18a42b\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.373503 kubelet[2776]: E0509 23:59:12.373203 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-84ab9604c4?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="400ms" May 9 23:59:12.374251 kubelet[2776]: I0509 23:59:12.374219 2776 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.374608 kubelet[2776]: E0509 23:59:12.374559 2776 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.576887 kubelet[2776]: I0509 23:59:12.576808 2776 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.577471 kubelet[2776]: E0509 23:59:12.577243 2776 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.626619 containerd[1722]: time="2025-05-09T23:59:12.626354026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-84ab9604c4,Uid:4d7b5afbc74d28a9f19a9797239e61b9,Namespace:kube-system,Attempt:0,}" May 9 23:59:12.639232 containerd[1722]: time="2025-05-09T23:59:12.639026664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-84ab9604c4,Uid:f11e0c765efa6c532095e0ed1b18a42b,Namespace:kube-system,Attempt:0,}" May 9 23:59:12.644062 containerd[1722]: time="2025-05-09T23:59:12.643971263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-84ab9604c4,Uid:b5362ccd8d45bdb6d8a387f9e16e7591,Namespace:kube-system,Attempt:0,}" May 9 23:59:12.774620 kubelet[2776]: E0509 23:59:12.774574 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-84ab9604c4?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="800ms" May 9 23:59:12.979573 kubelet[2776]: I0509 23:59:12.979166 2776 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:12.979573 kubelet[2776]: E0509 23:59:12.979488 2776 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:13.090836 kubelet[2776]: W0509 23:59:13.090794 2776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 9 23:59:13.090976 kubelet[2776]: E0509 23:59:13.090842 2776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 9 23:59:13.371305 kubelet[2776]: W0509 23:59:13.371164 2776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 9 23:59:13.371305 kubelet[2776]: E0509 23:59:13.371233 2776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 9 23:59:13.435196 kubelet[2776]: W0509 23:59:13.435098 2776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 9 23:59:13.435196 kubelet[2776]: E0509 23:59:13.435164 2776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 9 23:59:13.439850 kubelet[2776]: W0509 23:59:13.439802 2776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-84ab9604c4&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 9 23:59:13.439915 kubelet[2776]: E0509 23:59:13.439863 2776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-84ab9604c4&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 9 23:59:13.575528 kubelet[2776]: E0509 
23:59:13.575480 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-84ab9604c4?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="1.6s" May 9 23:59:13.781322 kubelet[2776]: I0509 23:59:13.781264 2776 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:13.781718 kubelet[2776]: E0509 23:59:13.781672 2776 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:14.229980 kubelet[2776]: E0509 23:59:14.229926 2776 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 9 23:59:14.519501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460447282.mount: Deactivated successfully. May 9 23:59:14.564451 containerd[1722]: time="2025-05-09T23:59:14.564391226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:59:14.567392 containerd[1722]: time="2025-05-09T23:59:14.567339545Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 9 23:59:14.572943 containerd[1722]: time="2025-05-09T23:59:14.572895544Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:59:14.576471 containerd[1722]: time="2025-05-09T23:59:14.575858623Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:59:14.584721 containerd[1722]: time="2025-05-09T23:59:14.584676262Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 23:59:14.591392 containerd[1722]: time="2025-05-09T23:59:14.591345140Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:59:14.601901 containerd[1722]: time="2025-05-09T23:59:14.601825698Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 23:59:14.609424 containerd[1722]: time="2025-05-09T23:59:14.609365696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:59:14.610571 containerd[1722]: time="2025-05-09T23:59:14.610301296Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.966255513s" May 9 23:59:14.612673 containerd[1722]: time="2025-05-09T23:59:14.612608736Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.973503272s" May 9 23:59:14.613363 containerd[1722]: time="2025-05-09T23:59:14.613328096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.98689375s" May 9 23:59:15.026315 kubelet[2776]: W0509 23:59:15.026217 2776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 9 23:59:15.026315 kubelet[2776]: E0509 23:59:15.026274 2776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 9 23:59:15.102523 kubelet[2776]: W0509 23:59:15.102483 2776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 9 23:59:15.102523 kubelet[2776]: E0509 23:59:15.102528 2776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 9 23:59:15.176681 kubelet[2776]: E0509 23:59:15.175974 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-84ab9604c4?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="3.2s" May 9 23:59:15.190987 containerd[1722]: time="2025-05-09T23:59:15.190709896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:15.190987 containerd[1722]: time="2025-05-09T23:59:15.190802576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:15.190987 containerd[1722]: time="2025-05-09T23:59:15.190818816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:15.190987 containerd[1722]: time="2025-05-09T23:59:15.190915416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:15.197094 containerd[1722]: time="2025-05-09T23:59:15.196477175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:15.197094 containerd[1722]: time="2025-05-09T23:59:15.196532695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:15.197094 containerd[1722]: time="2025-05-09T23:59:15.196551775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:15.197094 containerd[1722]: time="2025-05-09T23:59:15.196249375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:15.197094 containerd[1722]: time="2025-05-09T23:59:15.196318255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:15.197094 containerd[1722]: time="2025-05-09T23:59:15.196346695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:15.197094 containerd[1722]: time="2025-05-09T23:59:15.196435295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:15.197503 containerd[1722]: time="2025-05-09T23:59:15.197002855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:15.242889 systemd[1]: Started cri-containerd-140f1e281d1746919d171bb3965b8515753d35ac03758c68b54f154ca90a7cfc.scope - libcontainer container 140f1e281d1746919d171bb3965b8515753d35ac03758c68b54f154ca90a7cfc. May 9 23:59:15.245458 systemd[1]: Started cri-containerd-87022c09ab2f87541725ff5feb5f256961885c9dee7600c7ecb87ad2fa8d475f.scope - libcontainer container 87022c09ab2f87541725ff5feb5f256961885c9dee7600c7ecb87ad2fa8d475f. May 9 23:59:15.251451 systemd[1]: Started cri-containerd-f9275b20ac6182d3f8a5a30dad9e8ec5a992440877de563a0318a0a8a218b4f3.scope - libcontainer container f9275b20ac6182d3f8a5a30dad9e8ec5a992440877de563a0318a0a8a218b4f3. 
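Each Started cri-containerd-<id>.scope pairs one pod sandbox with its own runc shim, which is also why the io.containerd.runc.v2 plugin set loads three times above, once per shim. A hedged sketch for cross-checking the scopes against the CRI view, with the endpoint path assumed as before:

    systemctl list-units --type=scope 'cri-containerd-*'
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods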
May 9 23:59:15.298728 containerd[1722]: time="2025-05-09T23:59:15.298212554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-84ab9604c4,Uid:b5362ccd8d45bdb6d8a387f9e16e7591,Namespace:kube-system,Attempt:0,} returns sandbox id \"140f1e281d1746919d171bb3965b8515753d35ac03758c68b54f154ca90a7cfc\"" May 9 23:59:15.303817 containerd[1722]: time="2025-05-09T23:59:15.303725073Z" level=info msg="CreateContainer within sandbox \"140f1e281d1746919d171bb3965b8515753d35ac03758c68b54f154ca90a7cfc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 23:59:15.323438 containerd[1722]: time="2025-05-09T23:59:15.323384349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-84ab9604c4,Uid:4d7b5afbc74d28a9f19a9797239e61b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"87022c09ab2f87541725ff5feb5f256961885c9dee7600c7ecb87ad2fa8d475f\"" May 9 23:59:15.326000 containerd[1722]: time="2025-05-09T23:59:15.325566468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-84ab9604c4,Uid:f11e0c765efa6c532095e0ed1b18a42b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9275b20ac6182d3f8a5a30dad9e8ec5a992440877de563a0318a0a8a218b4f3\"" May 9 23:59:15.329028 containerd[1722]: time="2025-05-09T23:59:15.328986148Z" level=info msg="CreateContainer within sandbox \"87022c09ab2f87541725ff5feb5f256961885c9dee7600c7ecb87ad2fa8d475f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 23:59:15.329321 containerd[1722]: time="2025-05-09T23:59:15.328988308Z" level=info msg="CreateContainer within sandbox \"f9275b20ac6182d3f8a5a30dad9e8ec5a992440877de563a0318a0a8a218b4f3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 23:59:15.386070 kubelet[2776]: I0509 23:59:15.386034 2776 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:15.386411 kubelet[2776]: E0509 23:59:15.386384 2776 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:15.426677 containerd[1722]: time="2025-05-09T23:59:15.426468368Z" level=info msg="CreateContainer within sandbox \"140f1e281d1746919d171bb3965b8515753d35ac03758c68b54f154ca90a7cfc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c693380a6622d3c0eda689ff1b215409adc94b76c24d024b9fd0b425b20ca56a\"" May 9 23:59:15.427205 containerd[1722]: time="2025-05-09T23:59:15.427177127Z" level=info msg="StartContainer for \"c693380a6622d3c0eda689ff1b215409adc94b76c24d024b9fd0b425b20ca56a\"" May 9 23:59:15.440241 containerd[1722]: time="2025-05-09T23:59:15.440173005Z" level=info msg="CreateContainer within sandbox \"f9275b20ac6182d3f8a5a30dad9e8ec5a992440877de563a0318a0a8a218b4f3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7cd3fadfda3a62adcc19279c183d7047a096d9e1e3f071165acf2499adb7786b\"" May 9 23:59:15.441573 containerd[1722]: time="2025-05-09T23:59:15.440836685Z" level=info msg="StartContainer for \"7cd3fadfda3a62adcc19279c183d7047a096d9e1e3f071165acf2499adb7786b\"" May 9 23:59:15.459776 containerd[1722]: time="2025-05-09T23:59:15.459726401Z" level=info msg="CreateContainer within sandbox \"87022c09ab2f87541725ff5feb5f256961885c9dee7600c7ecb87ad2fa8d475f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns 
container id \"b22297e1c1679e7c5e05342e7290896effaca140ddac837aa2622279b5b55989\"" May 9 23:59:15.460109 systemd[1]: Started cri-containerd-c693380a6622d3c0eda689ff1b215409adc94b76c24d024b9fd0b425b20ca56a.scope - libcontainer container c693380a6622d3c0eda689ff1b215409adc94b76c24d024b9fd0b425b20ca56a. May 9 23:59:15.460713 containerd[1722]: time="2025-05-09T23:59:15.460679640Z" level=info msg="StartContainer for \"b22297e1c1679e7c5e05342e7290896effaca140ddac837aa2622279b5b55989\"" May 9 23:59:15.491862 systemd[1]: Started cri-containerd-7cd3fadfda3a62adcc19279c183d7047a096d9e1e3f071165acf2499adb7786b.scope - libcontainer container 7cd3fadfda3a62adcc19279c183d7047a096d9e1e3f071165acf2499adb7786b. May 9 23:59:15.509880 systemd[1]: Started cri-containerd-b22297e1c1679e7c5e05342e7290896effaca140ddac837aa2622279b5b55989.scope - libcontainer container b22297e1c1679e7c5e05342e7290896effaca140ddac837aa2622279b5b55989. May 9 23:59:15.540781 containerd[1722]: time="2025-05-09T23:59:15.540443464Z" level=info msg="StartContainer for \"c693380a6622d3c0eda689ff1b215409adc94b76c24d024b9fd0b425b20ca56a\" returns successfully" May 9 23:59:15.586599 containerd[1722]: time="2025-05-09T23:59:15.585758175Z" level=info msg="StartContainer for \"b22297e1c1679e7c5e05342e7290896effaca140ddac837aa2622279b5b55989\" returns successfully" May 9 23:59:15.586599 containerd[1722]: time="2025-05-09T23:59:15.585758375Z" level=info msg="StartContainer for \"7cd3fadfda3a62adcc19279c183d7047a096d9e1e3f071165acf2499adb7786b\" returns successfully" May 9 23:59:16.213662 kubelet[2776]: E0509 23:59:16.213345 2776 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-84ab9604c4\" not found" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:16.216118 kubelet[2776]: E0509 23:59:16.216013 2776 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-84ab9604c4\" not found" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:16.218600 kubelet[2776]: E0509 23:59:16.218566 2776 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-84ab9604c4\" not found" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:17.221864 kubelet[2776]: E0509 23:59:17.221821 2776 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-84ab9604c4\" not found" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:17.222298 kubelet[2776]: E0509 23:59:17.222140 2776 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-84ab9604c4\" not found" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:18.225745 kubelet[2776]: E0509 23:59:18.224221 2776 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-84ab9604c4\" not found" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:18.417246 kubelet[2776]: E0509 23:59:18.417195 2776 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-n-84ab9604c4\" not found" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:18.504351 kubelet[2776]: E0509 23:59:18.503007 2776 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.3-n-84ab9604c4.183e0150d531d264 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-84ab9604c4,UID:ci-4081.3.3-n-84ab9604c4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-84ab9604c4,},FirstTimestamp:2025-05-09 23:59:12.152154724 +0000 UTC m=+0.715013706,LastTimestamp:2025-05-09 23:59:12.152154724 +0000 UTC m=+0.715013706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-84ab9604c4,}" May 9 23:59:18.589298 kubelet[2776]: I0509 23:59:18.589261 2776 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:18.602140 kubelet[2776]: I0509 23:59:18.602084 2776 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:18.665196 kubelet[2776]: I0509 23:59:18.665146 2776 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-84ab9604c4" May 9 23:59:18.687693 kubelet[2776]: E0509 23:59:18.687631 2776 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-n-84ab9604c4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.3-n-84ab9604c4" May 9 23:59:18.687693 kubelet[2776]: I0509 23:59:18.687684 2776 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-84ab9604c4" May 9 23:59:18.693206 kubelet[2776]: E0509 23:59:18.693165 2776 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.3-n-84ab9604c4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-84ab9604c4" May 9 23:59:18.693206 kubelet[2776]: I0509 23:59:18.693197 2776 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-n-84ab9604c4" May 9 23:59:18.695654 kubelet[2776]: E0509 23:59:18.695606 2776 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-n-84ab9604c4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-n-84ab9604c4" May 9 23:59:19.152406 kubelet[2776]: I0509 23:59:19.152111 2776 apiserver.go:52] "Watching apiserver" May 9 23:59:19.163706 kubelet[2776]: I0509 23:59:19.163659 2776 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 23:59:20.470622 systemd[1]: Reloading requested from client PID 3051 ('systemctl') (unit session-9.scope)... May 9 23:59:20.470671 systemd[1]: Reloading... May 9 23:59:20.612679 zram_generator::config[3087]: No configuration found. May 9 23:59:20.725391 kubelet[2776]: I0509 23:59:20.725267 2776 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-84ab9604c4" May 9 23:59:20.749649 kubelet[2776]: W0509 23:59:20.748012 2776 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 9 23:59:20.768229 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:59:20.869290 systemd[1]: Reloading finished in 398 ms. 
May 9 23:59:20.914238 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:59:20.933098 systemd[1]: kubelet.service: Deactivated successfully. May 9 23:59:20.933494 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:59:20.933686 systemd[1]: kubelet.service: Consumed 1.108s CPU time, 121.0M memory peak, 0B memory swap peak. May 9 23:59:20.942589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:59:21.099815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:59:21.110005 (kubelet)[3155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 23:59:21.174198 kubelet[3155]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:59:21.175291 kubelet[3155]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 9 23:59:21.175291 kubelet[3155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:59:21.175291 kubelet[3155]: I0509 23:59:21.174671 3155 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 23:59:21.183679 kubelet[3155]: I0509 23:59:21.182832 3155 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 9 23:59:21.183679 kubelet[3155]: I0509 23:59:21.182876 3155 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 23:59:21.183679 kubelet[3155]: I0509 23:59:21.183185 3155 server.go:954] "Client rotation is on, will bootstrap in background" May 9 23:59:21.185208 kubelet[3155]: I0509 23:59:21.185173 3155 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 23:59:21.187864 kubelet[3155]: I0509 23:59:21.187814 3155 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 23:59:21.191500 kubelet[3155]: E0509 23:59:21.191445 3155 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 23:59:21.191500 kubelet[3155]: I0509 23:59:21.191486 3155 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 23:59:21.194819 kubelet[3155]: I0509 23:59:21.194784 3155 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 23:59:21.195020 kubelet[3155]: I0509 23:59:21.194982 3155 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 23:59:21.195200 kubelet[3155]: I0509 23:59:21.195018 3155 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-84ab9604c4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 23:59:21.195304 kubelet[3155]: I0509 23:59:21.195206 3155 topology_manager.go:138] "Creating topology manager with none policy" May 9 23:59:21.195304 kubelet[3155]: I0509 23:59:21.195215 3155 container_manager_linux.go:304] "Creating device plugin manager" May 9 23:59:21.195304 kubelet[3155]: I0509 23:59:21.195259 3155 state_mem.go:36] "Initialized new in-memory state store" May 9 23:59:21.195409 kubelet[3155]: I0509 23:59:21.195395 3155 kubelet.go:446] "Attempting to sync node with API server" May 9 23:59:21.195441 kubelet[3155]: I0509 23:59:21.195410 3155 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 23:59:21.195475 kubelet[3155]: I0509 23:59:21.195455 3155 kubelet.go:352] "Adding apiserver pod source" May 9 23:59:21.195475 kubelet[3155]: I0509 23:59:21.195464 3155 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 23:59:21.199258 kubelet[3155]: I0509 23:59:21.199224 3155 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 23:59:21.200059 kubelet[3155]: I0509 23:59:21.200037 3155 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 23:59:21.201683 kubelet[3155]: I0509 23:59:21.201377 3155 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 9 23:59:21.201683 kubelet[3155]: I0509 23:59:21.201420 3155 server.go:1287] "Started kubelet" May 9 23:59:21.206798 kubelet[3155]: I0509 23:59:21.206661 3155 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 9 23:59:21.207885 kubelet[3155]: I0509 23:59:21.207601 3155 server.go:490] "Adding 
debug handlers to kubelet server" May 9 23:59:21.208292 kubelet[3155]: I0509 23:59:21.207565 3155 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 23:59:21.209618 kubelet[3155]: I0509 23:59:21.209251 3155 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 23:59:21.210700 kubelet[3155]: I0509 23:59:21.209820 3155 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 23:59:21.216067 kubelet[3155]: I0509 23:59:21.215253 3155 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 23:59:21.216419 kubelet[3155]: I0509 23:59:21.216393 3155 volume_manager.go:297] "Starting Kubelet Volume Manager" May 9 23:59:21.219641 kubelet[3155]: E0509 23:59:21.216675 3155 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-84ab9604c4\" not found" May 9 23:59:21.219641 kubelet[3155]: I0509 23:59:21.218544 3155 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 23:59:21.219641 kubelet[3155]: I0509 23:59:21.218704 3155 reconciler.go:26] "Reconciler: start to sync state" May 9 23:59:21.221688 kubelet[3155]: I0509 23:59:21.220591 3155 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 23:59:21.222648 kubelet[3155]: I0509 23:59:21.221786 3155 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 23:59:21.222648 kubelet[3155]: I0509 23:59:21.221818 3155 status_manager.go:227] "Starting to sync pod status with apiserver" May 9 23:59:21.222648 kubelet[3155]: I0509 23:59:21.221842 3155 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 9 23:59:21.222648 kubelet[3155]: I0509 23:59:21.221848 3155 kubelet.go:2388] "Starting kubelet main sync loop" May 9 23:59:21.222648 kubelet[3155]: E0509 23:59:21.221893 3155 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 23:59:21.232416 kubelet[3155]: I0509 23:59:21.232385 3155 factory.go:221] Registration of the systemd container factory successfully May 9 23:59:21.233854 kubelet[3155]: I0509 23:59:21.233816 3155 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 23:59:21.257307 kubelet[3155]: E0509 23:59:21.257156 3155 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 23:59:21.257731 kubelet[3155]: I0509 23:59:21.257708 3155 factory.go:221] Registration of the containerd container factory successfully May 9 23:59:21.317615 kubelet[3155]: I0509 23:59:21.317581 3155 cpu_manager.go:221] "Starting CPU manager" policy="none" May 9 23:59:21.317615 kubelet[3155]: I0509 23:59:21.317604 3155 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 9 23:59:21.317615 kubelet[3155]: I0509 23:59:21.317631 3155 state_mem.go:36] "Initialized new in-memory state store" May 9 23:59:21.317857 kubelet[3155]: I0509 23:59:21.317847 3155 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 23:59:21.317881 kubelet[3155]: I0509 23:59:21.317859 3155 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 23:59:21.317881 kubelet[3155]: I0509 23:59:21.317878 3155 policy_none.go:49] "None policy: Start" May 9 23:59:21.317922 kubelet[3155]: I0509 23:59:21.317886 3155 memory_manager.go:186] "Starting memorymanager" policy="None" May 9 23:59:21.317922 kubelet[3155]: I0509 23:59:21.317895 3155 state_mem.go:35] "Initializing new in-memory state store" May 9 23:59:21.318023 kubelet[3155]: I0509 23:59:21.317991 3155 state_mem.go:75] "Updated machine memory state" May 9 23:59:21.323335 kubelet[3155]: E0509 23:59:21.322374 3155 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 23:59:21.323335 kubelet[3155]: I0509 23:59:21.322561 3155 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 23:59:21.323335 kubelet[3155]: I0509 23:59:21.322762 3155 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 23:59:21.323335 kubelet[3155]: I0509 23:59:21.322774 3155 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 23:59:21.323900 kubelet[3155]: I0509 23:59:21.323867 3155 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 23:59:21.326646 kubelet[3155]: E0509 23:59:21.326597 3155 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 9 23:59:21.443624 kubelet[3155]: I0509 23:59:21.442333 3155 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.470456 kubelet[3155]: I0509 23:59:21.469971 3155 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.470456 kubelet[3155]: I0509 23:59:21.470061 3155 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.523695 kubelet[3155]: I0509 23:59:21.523266 3155 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.525109 kubelet[3155]: I0509 23:59:21.524282 3155 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.525109 kubelet[3155]: I0509 23:59:21.524348 3155 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.536708 kubelet[3155]: W0509 23:59:21.536351 3155 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 9 23:59:21.541316 kubelet[3155]: W0509 23:59:21.541205 3155 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 9 23:59:21.548709 kubelet[3155]: W0509 23:59:21.548677 3155 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 9 23:59:21.549266 kubelet[3155]: E0509 23:59:21.549019 3155 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-n-84ab9604c4\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.620396 kubelet[3155]: I0509 23:59:21.620128 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f11e0c765efa6c532095e0ed1b18a42b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-84ab9604c4\" (UID: \"f11e0c765efa6c532095e0ed1b18a42b\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.620396 kubelet[3155]: I0509 23:59:21.620181 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f11e0c765efa6c532095e0ed1b18a42b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-84ab9604c4\" (UID: \"f11e0c765efa6c532095e0ed1b18a42b\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.620396 kubelet[3155]: I0509 23:59:21.620202 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5362ccd8d45bdb6d8a387f9e16e7591-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-84ab9604c4\" (UID: \"b5362ccd8d45bdb6d8a387f9e16e7591\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.620396 kubelet[3155]: I0509 23:59:21.620219 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d7b5afbc74d28a9f19a9797239e61b9-k8s-certs\") pod 
\"kube-apiserver-ci-4081.3.3-n-84ab9604c4\" (UID: \"4d7b5afbc74d28a9f19a9797239e61b9\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.620396 kubelet[3155]: I0509 23:59:21.620239 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d7b5afbc74d28a9f19a9797239e61b9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-84ab9604c4\" (UID: \"4d7b5afbc74d28a9f19a9797239e61b9\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.620675 kubelet[3155]: I0509 23:59:21.620258 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f11e0c765efa6c532095e0ed1b18a42b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-84ab9604c4\" (UID: \"f11e0c765efa6c532095e0ed1b18a42b\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.620675 kubelet[3155]: I0509 23:59:21.620273 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d7b5afbc74d28a9f19a9797239e61b9-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-84ab9604c4\" (UID: \"4d7b5afbc74d28a9f19a9797239e61b9\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.620675 kubelet[3155]: I0509 23:59:21.620290 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f11e0c765efa6c532095e0ed1b18a42b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-84ab9604c4\" (UID: \"f11e0c765efa6c532095e0ed1b18a42b\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-84ab9604c4" May 9 23:59:21.620675 kubelet[3155]: I0509 23:59:21.620308 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f11e0c765efa6c532095e0ed1b18a42b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-84ab9604c4\" (UID: \"f11e0c765efa6c532095e0ed1b18a42b\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-84ab9604c4" May 9 23:59:22.196625 kubelet[3155]: I0509 23:59:22.196576 3155 apiserver.go:52] "Watching apiserver" May 9 23:59:22.221520 kubelet[3155]: I0509 23:59:22.219192 3155 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 23:59:22.293415 kubelet[3155]: I0509 23:59:22.292797 3155 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-84ab9604c4" May 9 23:59:22.317796 kubelet[3155]: W0509 23:59:22.317471 3155 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 9 23:59:22.317796 kubelet[3155]: E0509 23:59:22.317551 3155 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-n-84ab9604c4\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-84ab9604c4" May 9 23:59:22.379250 kubelet[3155]: I0509 23:59:22.379171 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-n-84ab9604c4" podStartSLOduration=1.379149704 podStartE2EDuration="1.379149704s" podCreationTimestamp="2025-05-09 23:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:59:22.345925631 +0000 UTC m=+1.231828171" watchObservedRunningTime="2025-05-09 23:59:22.379149704 +0000 UTC m=+1.265052284" May 9 23:59:22.404957 kubelet[3155]: I0509 23:59:22.404536 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-n-84ab9604c4" podStartSLOduration=2.404510698 podStartE2EDuration="2.404510698s" podCreationTimestamp="2025-05-09 23:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:59:22.379814384 +0000 UTC m=+1.265716964" watchObservedRunningTime="2025-05-09 23:59:22.404510698 +0000 UTC m=+1.290413278" May 9 23:59:25.615865 kubelet[3155]: I0509 23:59:25.615586 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-84ab9604c4" podStartSLOduration=4.615566275 podStartE2EDuration="4.615566275s" podCreationTimestamp="2025-05-09 23:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:59:22.406566138 +0000 UTC m=+1.292468718" watchObservedRunningTime="2025-05-09 23:59:25.615566275 +0000 UTC m=+4.501468855" May 9 23:59:26.164206 kubelet[3155]: I0509 23:59:26.164155 3155 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 23:59:26.164726 containerd[1722]: time="2025-05-09T23:59:26.164674555Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 23:59:26.165054 kubelet[3155]: I0509 23:59:26.164924 3155 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 23:59:26.366998 sudo[2241]: pam_unix(sudo:session): session closed for user root May 9 23:59:26.437938 sshd[2238]: pam_unix(sshd:session): session closed for user core May 9 23:59:26.443330 systemd-logind[1686]: Session 9 logged out. Waiting for processes to exit. May 9 23:59:26.444518 systemd[1]: sshd@6-10.200.20.38:22-10.200.16.10:45138.service: Deactivated successfully. May 9 23:59:26.447148 systemd[1]: session-9.scope: Deactivated successfully. May 9 23:59:26.447571 systemd[1]: session-9.scope: Consumed 7.099s CPU time, 150.7M memory peak, 0B memory swap peak. May 9 23:59:26.448838 systemd-logind[1686]: Removed session 9. May 9 23:59:26.838874 systemd[1]: Created slice kubepods-besteffort-pod6abd5e4e_1ead_4ff5_9042_52fad65a097c.slice - libcontainer container kubepods-besteffort-pod6abd5e4e_1ead_4ff5_9042_52fad65a097c.slice. 
May 9 23:59:26.853709 kubelet[3155]: I0509 23:59:26.853453 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6abd5e4e-1ead-4ff5-9042-52fad65a097c-lib-modules\") pod \"kube-proxy-98d8c\" (UID: \"6abd5e4e-1ead-4ff5-9042-52fad65a097c\") " pod="kube-system/kube-proxy-98d8c" May 9 23:59:26.853709 kubelet[3155]: I0509 23:59:26.853501 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6abd5e4e-1ead-4ff5-9042-52fad65a097c-kube-proxy\") pod \"kube-proxy-98d8c\" (UID: \"6abd5e4e-1ead-4ff5-9042-52fad65a097c\") " pod="kube-system/kube-proxy-98d8c" May 9 23:59:26.853709 kubelet[3155]: I0509 23:59:26.853576 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6abd5e4e-1ead-4ff5-9042-52fad65a097c-xtables-lock\") pod \"kube-proxy-98d8c\" (UID: \"6abd5e4e-1ead-4ff5-9042-52fad65a097c\") " pod="kube-system/kube-proxy-98d8c" May 9 23:59:26.853709 kubelet[3155]: I0509 23:59:26.853601 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mghxx\" (UniqueName: \"kubernetes.io/projected/6abd5e4e-1ead-4ff5-9042-52fad65a097c-kube-api-access-mghxx\") pod \"kube-proxy-98d8c\" (UID: \"6abd5e4e-1ead-4ff5-9042-52fad65a097c\") " pod="kube-system/kube-proxy-98d8c" May 9 23:59:26.966698 kubelet[3155]: E0509 23:59:26.966629 3155 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 9 23:59:26.966698 kubelet[3155]: E0509 23:59:26.966694 3155 projected.go:194] Error preparing data for projected volume kube-api-access-mghxx for pod kube-system/kube-proxy-98d8c: configmap "kube-root-ca.crt" not found May 9 23:59:26.966864 kubelet[3155]: E0509 23:59:26.966763 3155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6abd5e4e-1ead-4ff5-9042-52fad65a097c-kube-api-access-mghxx podName:6abd5e4e-1ead-4ff5-9042-52fad65a097c nodeName:}" failed. No retries permitted until 2025-05-09 23:59:27.466741379 +0000 UTC m=+6.352643959 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mghxx" (UniqueName: "kubernetes.io/projected/6abd5e4e-1ead-4ff5-9042-52fad65a097c-kube-api-access-mghxx") pod "kube-proxy-98d8c" (UID: "6abd5e4e-1ead-4ff5-9042-52fad65a097c") : configmap "kube-root-ca.crt" not found May 9 23:59:27.209837 systemd[1]: Created slice kubepods-besteffort-podf2874e99_b544_4ca5_b442_cf3e9d78046b.slice - libcontainer container kubepods-besteffort-podf2874e99_b544_4ca5_b442_cf3e9d78046b.slice. 
May 9 23:59:27.256265 kubelet[3155]: I0509 23:59:27.256146 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f2874e99-b544-4ca5-b442-cf3e9d78046b-var-lib-calico\") pod \"tigera-operator-789496d6f5-95zfl\" (UID: \"f2874e99-b544-4ca5-b442-cf3e9d78046b\") " pod="tigera-operator/tigera-operator-789496d6f5-95zfl" May 9 23:59:27.256265 kubelet[3155]: I0509 23:59:27.256208 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnsqz\" (UniqueName: \"kubernetes.io/projected/f2874e99-b544-4ca5-b442-cf3e9d78046b-kube-api-access-gnsqz\") pod \"tigera-operator-789496d6f5-95zfl\" (UID: \"f2874e99-b544-4ca5-b442-cf3e9d78046b\") " pod="tigera-operator/tigera-operator-789496d6f5-95zfl" May 9 23:59:27.515593 containerd[1722]: time="2025-05-09T23:59:27.515062659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-95zfl,Uid:f2874e99-b544-4ca5-b442-cf3e9d78046b,Namespace:tigera-operator,Attempt:0,}" May 9 23:59:27.569846 containerd[1722]: time="2025-05-09T23:59:27.569072968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:27.569846 containerd[1722]: time="2025-05-09T23:59:27.569143488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:27.569846 containerd[1722]: time="2025-05-09T23:59:27.569160888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:27.569846 containerd[1722]: time="2025-05-09T23:59:27.569287367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:27.593856 systemd[1]: Started cri-containerd-72e4451cd86ebe8bbf3089e879038536b0fdc1a4d1925d529918f396a99ba633.scope - libcontainer container 72e4451cd86ebe8bbf3089e879038536b0fdc1a4d1925d529918f396a99ba633. May 9 23:59:27.623434 containerd[1722]: time="2025-05-09T23:59:27.622969076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-95zfl,Uid:f2874e99-b544-4ca5-b442-cf3e9d78046b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"72e4451cd86ebe8bbf3089e879038536b0fdc1a4d1925d529918f396a99ba633\"" May 9 23:59:27.626746 containerd[1722]: time="2025-05-09T23:59:27.626495795Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 9 23:59:27.749539 containerd[1722]: time="2025-05-09T23:59:27.749479648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98d8c,Uid:6abd5e4e-1ead-4ff5-9042-52fad65a097c,Namespace:kube-system,Attempt:0,}" May 9 23:59:27.818007 containerd[1722]: time="2025-05-09T23:59:27.817473353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:27.818007 containerd[1722]: time="2025-05-09T23:59:27.817545993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:27.818007 containerd[1722]: time="2025-05-09T23:59:27.817561033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:27.818007 containerd[1722]: time="2025-05-09T23:59:27.817841833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:27.840862 systemd[1]: Started cri-containerd-192a96f673bf95ba117f560ddf6d6cce848641a4d7398f4162331db551f4685e.scope - libcontainer container 192a96f673bf95ba117f560ddf6d6cce848641a4d7398f4162331db551f4685e. May 9 23:59:27.862915 containerd[1722]: time="2025-05-09T23:59:27.862669463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98d8c,Uid:6abd5e4e-1ead-4ff5-9042-52fad65a097c,Namespace:kube-system,Attempt:0,} returns sandbox id \"192a96f673bf95ba117f560ddf6d6cce848641a4d7398f4162331db551f4685e\"" May 9 23:59:27.867771 containerd[1722]: time="2025-05-09T23:59:27.867720022Z" level=info msg="CreateContainer within sandbox \"192a96f673bf95ba117f560ddf6d6cce848641a4d7398f4162331db551f4685e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 23:59:27.915890 containerd[1722]: time="2025-05-09T23:59:27.915828612Z" level=info msg="CreateContainer within sandbox \"192a96f673bf95ba117f560ddf6d6cce848641a4d7398f4162331db551f4685e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a4cb706ea16677d9bb3856a7a37cee09abb6d6b63b34393384b8caeb6407319b\"" May 9 23:59:27.916805 containerd[1722]: time="2025-05-09T23:59:27.916768011Z" level=info msg="StartContainer for \"a4cb706ea16677d9bb3856a7a37cee09abb6d6b63b34393384b8caeb6407319b\"" May 9 23:59:27.944445 systemd[1]: Started cri-containerd-a4cb706ea16677d9bb3856a7a37cee09abb6d6b63b34393384b8caeb6407319b.scope - libcontainer container a4cb706ea16677d9bb3856a7a37cee09abb6d6b63b34393384b8caeb6407319b. May 9 23:59:27.977975 containerd[1722]: time="2025-05-09T23:59:27.977898038Z" level=info msg="StartContainer for \"a4cb706ea16677d9bb3856a7a37cee09abb6d6b63b34393384b8caeb6407319b\" returns successfully" May 9 23:59:28.549134 kubelet[3155]: I0509 23:59:28.549070 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-98d8c" podStartSLOduration=2.5490485080000003 podStartE2EDuration="2.549048508s" podCreationTimestamp="2025-05-09 23:59:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:59:28.321778521 +0000 UTC m=+7.207681141" watchObservedRunningTime="2025-05-09 23:59:28.549048508 +0000 UTC m=+7.434951088" May 9 23:59:29.392504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount41597492.mount: Deactivated successfully. 
May 9 23:59:29.873979 containerd[1722]: time="2025-05-09T23:59:29.873920517Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:29.877191 containerd[1722]: time="2025-05-09T23:59:29.877023276Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 9 23:59:29.882982 containerd[1722]: time="2025-05-09T23:59:29.882125635Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:29.890437 containerd[1722]: time="2025-05-09T23:59:29.890375353Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:29.891589 containerd[1722]: time="2025-05-09T23:59:29.891029193Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.264475238s" May 9 23:59:29.891589 containerd[1722]: time="2025-05-09T23:59:29.891066233Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 9 23:59:29.893986 containerd[1722]: time="2025-05-09T23:59:29.893933472Z" level=info msg="CreateContainer within sandbox \"72e4451cd86ebe8bbf3089e879038536b0fdc1a4d1925d529918f396a99ba633\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 9 23:59:29.940901 containerd[1722]: time="2025-05-09T23:59:29.940842981Z" level=info msg="CreateContainer within sandbox \"72e4451cd86ebe8bbf3089e879038536b0fdc1a4d1925d529918f396a99ba633\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d34887309fe4a2376bc36a59a340d6f91348dbaf2c998972619222603a792552\"" May 9 23:59:29.942056 containerd[1722]: time="2025-05-09T23:59:29.941817341Z" level=info msg="StartContainer for \"d34887309fe4a2376bc36a59a340d6f91348dbaf2c998972619222603a792552\"" May 9 23:59:29.973956 systemd[1]: Started cri-containerd-d34887309fe4a2376bc36a59a340d6f91348dbaf2c998972619222603a792552.scope - libcontainer container d34887309fe4a2376bc36a59a340d6f91348dbaf2c998972619222603a792552. 
May 9 23:59:30.007442 containerd[1722]: time="2025-05-09T23:59:30.007371885Z" level=info msg="StartContainer for \"d34887309fe4a2376bc36a59a340d6f91348dbaf2c998972619222603a792552\" returns successfully" May 9 23:59:34.773225 kubelet[3155]: I0509 23:59:34.773125 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-95zfl" podStartSLOduration=5.506543049 podStartE2EDuration="7.773100487s" podCreationTimestamp="2025-05-09 23:59:27 +0000 UTC" firstStartedPulling="2025-05-09 23:59:27.625343155 +0000 UTC m=+6.511245735" lastFinishedPulling="2025-05-09 23:59:29.891900593 +0000 UTC m=+8.777803173" observedRunningTime="2025-05-09 23:59:30.336619528 +0000 UTC m=+9.222522148" watchObservedRunningTime="2025-05-09 23:59:34.773100487 +0000 UTC m=+13.659003067" May 9 23:59:34.786852 systemd[1]: Created slice kubepods-besteffort-pod7a684987_e4b0_40fc_9e57_1a51a1c59cb9.slice - libcontainer container kubepods-besteffort-pod7a684987_e4b0_40fc_9e57_1a51a1c59cb9.slice. May 9 23:59:34.803592 kubelet[3155]: I0509 23:59:34.803439 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7a684987-e4b0-40fc-9e57-1a51a1c59cb9-typha-certs\") pod \"calico-typha-78cb5dc7dd-tqsqw\" (UID: \"7a684987-e4b0-40fc-9e57-1a51a1c59cb9\") " pod="calico-system/calico-typha-78cb5dc7dd-tqsqw" May 9 23:59:34.803592 kubelet[3155]: I0509 23:59:34.803486 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a684987-e4b0-40fc-9e57-1a51a1c59cb9-tigera-ca-bundle\") pod \"calico-typha-78cb5dc7dd-tqsqw\" (UID: \"7a684987-e4b0-40fc-9e57-1a51a1c59cb9\") " pod="calico-system/calico-typha-78cb5dc7dd-tqsqw" May 9 23:59:34.803592 kubelet[3155]: I0509 23:59:34.803508 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tffvw\" (UniqueName: \"kubernetes.io/projected/7a684987-e4b0-40fc-9e57-1a51a1c59cb9-kube-api-access-tffvw\") pod \"calico-typha-78cb5dc7dd-tqsqw\" (UID: \"7a684987-e4b0-40fc-9e57-1a51a1c59cb9\") " pod="calico-system/calico-typha-78cb5dc7dd-tqsqw" May 9 23:59:34.986009 systemd[1]: Created slice kubepods-besteffort-podbac10bec_0ad2_471f_b205_fca60c9b8b1f.slice - libcontainer container kubepods-besteffort-podbac10bec_0ad2_471f_b205_fca60c9b8b1f.slice. 
May 9 23:59:35.004903 kubelet[3155]: I0509 23:59:35.004847 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bac10bec-0ad2-471f-b205-fca60c9b8b1f-node-certs\") pod \"calico-node-cdqhg\" (UID: \"bac10bec-0ad2-471f-b205-fca60c9b8b1f\") " pod="calico-system/calico-node-cdqhg" May 9 23:59:35.004903 kubelet[3155]: I0509 23:59:35.004903 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bac10bec-0ad2-471f-b205-fca60c9b8b1f-policysync\") pod \"calico-node-cdqhg\" (UID: \"bac10bec-0ad2-471f-b205-fca60c9b8b1f\") " pod="calico-system/calico-node-cdqhg" May 9 23:59:35.005144 kubelet[3155]: I0509 23:59:35.004922 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bac10bec-0ad2-471f-b205-fca60c9b8b1f-var-lib-calico\") pod \"calico-node-cdqhg\" (UID: \"bac10bec-0ad2-471f-b205-fca60c9b8b1f\") " pod="calico-system/calico-node-cdqhg" May 9 23:59:35.005144 kubelet[3155]: I0509 23:59:35.004941 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bac10bec-0ad2-471f-b205-fca60c9b8b1f-cni-net-dir\") pod \"calico-node-cdqhg\" (UID: \"bac10bec-0ad2-471f-b205-fca60c9b8b1f\") " pod="calico-system/calico-node-cdqhg" May 9 23:59:35.005144 kubelet[3155]: I0509 23:59:35.004959 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bac10bec-0ad2-471f-b205-fca60c9b8b1f-var-run-calico\") pod \"calico-node-cdqhg\" (UID: \"bac10bec-0ad2-471f-b205-fca60c9b8b1f\") " pod="calico-system/calico-node-cdqhg" May 9 23:59:35.005144 kubelet[3155]: I0509 23:59:35.004977 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bac10bec-0ad2-471f-b205-fca60c9b8b1f-cni-log-dir\") pod \"calico-node-cdqhg\" (UID: \"bac10bec-0ad2-471f-b205-fca60c9b8b1f\") " pod="calico-system/calico-node-cdqhg" May 9 23:59:35.005144 kubelet[3155]: I0509 23:59:35.004993 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bac10bec-0ad2-471f-b205-fca60c9b8b1f-flexvol-driver-host\") pod \"calico-node-cdqhg\" (UID: \"bac10bec-0ad2-471f-b205-fca60c9b8b1f\") " pod="calico-system/calico-node-cdqhg" May 9 23:59:35.005261 kubelet[3155]: I0509 23:59:35.005017 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bac10bec-0ad2-471f-b205-fca60c9b8b1f-lib-modules\") pod \"calico-node-cdqhg\" (UID: \"bac10bec-0ad2-471f-b205-fca60c9b8b1f\") " pod="calico-system/calico-node-cdqhg" May 9 23:59:35.005261 kubelet[3155]: I0509 23:59:35.005032 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bac10bec-0ad2-471f-b205-fca60c9b8b1f-xtables-lock\") pod \"calico-node-cdqhg\" (UID: \"bac10bec-0ad2-471f-b205-fca60c9b8b1f\") " pod="calico-system/calico-node-cdqhg" May 9 23:59:35.005261 kubelet[3155]: I0509 23:59:35.005051 3155 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bac10bec-0ad2-471f-b205-fca60c9b8b1f-tigera-ca-bundle\") pod \"calico-node-cdqhg\" (UID: \"bac10bec-0ad2-471f-b205-fca60c9b8b1f\") " pod="calico-system/calico-node-cdqhg" May 9 23:59:35.005261 kubelet[3155]: I0509 23:59:35.005067 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nggqp\" (UniqueName: \"kubernetes.io/projected/bac10bec-0ad2-471f-b205-fca60c9b8b1f-kube-api-access-nggqp\") pod \"calico-node-cdqhg\" (UID: \"bac10bec-0ad2-471f-b205-fca60c9b8b1f\") " pod="calico-system/calico-node-cdqhg" May 9 23:59:35.005261 kubelet[3155]: I0509 23:59:35.005084 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bac10bec-0ad2-471f-b205-fca60c9b8b1f-cni-bin-dir\") pod \"calico-node-cdqhg\" (UID: \"bac10bec-0ad2-471f-b205-fca60c9b8b1f\") " pod="calico-system/calico-node-cdqhg" May 9 23:59:35.103163 containerd[1722]: time="2025-05-09T23:59:35.093589172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78cb5dc7dd-tqsqw,Uid:7a684987-e4b0-40fc-9e57-1a51a1c59cb9,Namespace:calico-system,Attempt:0,}" May 9 23:59:35.108584 kubelet[3155]: E0509 23:59:35.108377 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.108584 kubelet[3155]: W0509 23:59:35.108409 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.108584 kubelet[3155]: E0509 23:59:35.108448 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.109364 kubelet[3155]: E0509 23:59:35.108945 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.109364 kubelet[3155]: W0509 23:59:35.108961 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.109364 kubelet[3155]: E0509 23:59:35.109096 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.109713 kubelet[3155]: E0509 23:59:35.109480 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.109713 kubelet[3155]: W0509 23:59:35.109494 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.109963 kubelet[3155]: E0509 23:59:35.109867 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.109963 kubelet[3155]: W0509 23:59:35.109880 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.110218 kubelet[3155]: E0509 23:59:35.110182 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.110218 kubelet[3155]: W0509 23:59:35.110195 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.110560 kubelet[3155]: E0509 23:59:35.110472 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.110560 kubelet[3155]: W0509 23:59:35.110484 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.110903 kubelet[3155]: E0509 23:59:35.110822 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.110903 kubelet[3155]: W0509 23:59:35.110835 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.111225 kubelet[3155]: E0509 23:59:35.109549 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.111225 kubelet[3155]: E0509 23:59:35.111108 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.111225 kubelet[3155]: W0509 23:59:35.111119 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.111225 kubelet[3155]: E0509 23:59:35.111137 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.111225 kubelet[3155]: E0509 23:59:35.111151 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.111225 kubelet[3155]: E0509 23:59:35.111162 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.111225 kubelet[3155]: E0509 23:59:35.111173 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.111225 kubelet[3155]: E0509 23:59:35.111182 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.112432 kubelet[3155]: E0509 23:59:35.111784 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.112432 kubelet[3155]: W0509 23:59:35.111814 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.112432 kubelet[3155]: E0509 23:59:35.112355 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.112968 kubelet[3155]: E0509 23:59:35.112842 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.112968 kubelet[3155]: W0509 23:59:35.112861 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.112968 kubelet[3155]: E0509 23:59:35.112914 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.113329 kubelet[3155]: E0509 23:59:35.113208 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.113329 kubelet[3155]: W0509 23:59:35.113222 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.113329 kubelet[3155]: E0509 23:59:35.113250 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.113697 kubelet[3155]: E0509 23:59:35.113558 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.113697 kubelet[3155]: W0509 23:59:35.113573 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.113697 kubelet[3155]: E0509 23:59:35.113607 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.114184 kubelet[3155]: E0509 23:59:35.113997 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.114184 kubelet[3155]: W0509 23:59:35.114012 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.114184 kubelet[3155]: E0509 23:59:35.114056 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.114379 kubelet[3155]: E0509 23:59:35.114365 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.114457 kubelet[3155]: W0509 23:59:35.114443 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.114574 kubelet[3155]: E0509 23:59:35.114537 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.114914 kubelet[3155]: E0509 23:59:35.114864 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.114914 kubelet[3155]: W0509 23:59:35.114880 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.115052 kubelet[3155]: E0509 23:59:35.115025 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.115249 kubelet[3155]: E0509 23:59:35.115224 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.115367 kubelet[3155]: W0509 23:59:35.115250 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.115367 kubelet[3155]: E0509 23:59:35.115289 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.115547 kubelet[3155]: E0509 23:59:35.115504 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.115547 kubelet[3155]: W0509 23:59:35.115520 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.116025 kubelet[3155]: E0509 23:59:35.115712 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.116025 kubelet[3155]: E0509 23:59:35.115715 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.116025 kubelet[3155]: W0509 23:59:35.115723 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.116025 kubelet[3155]: E0509 23:59:35.115832 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.117670 kubelet[3155]: E0509 23:59:35.117443 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.117670 kubelet[3155]: W0509 23:59:35.117470 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.117786 kubelet[3155]: E0509 23:59:35.117703 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.117786 kubelet[3155]: W0509 23:59:35.117732 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.117895 kubelet[3155]: E0509 23:59:35.117873 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.117895 kubelet[3155]: W0509 23:59:35.117887 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.118041 kubelet[3155]: E0509 23:59:35.118021 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.118041 kubelet[3155]: W0509 23:59:35.118033 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.118203 kubelet[3155]: E0509 23:59:35.118180 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.118203 kubelet[3155]: W0509 23:59:35.118193 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" May 9 23:59:35.119665 kubelet[3155]: E0509 23:59:35.118341 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.119665 kubelet[3155]: W0509 23:59:35.118354 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.119665 kubelet[3155]: E0509 23:59:35.118369 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.120622 kubelet[3155]: E0509 23:59:35.120579 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.120935 kubelet[3155]: E0509 23:59:35.120798 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.120935 kubelet[3155]: E0509 23:59:35.120839 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.120935 kubelet[3155]: E0509 23:59:35.120858 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.120935 kubelet[3155]: E0509 23:59:35.120886 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.121069 kubelet[3155]: E0509 23:59:35.121048 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.121069 kubelet[3155]: W0509 23:59:35.121065 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.121221 kubelet[3155]: E0509 23:59:35.121169 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.126680 kubelet[3155]: E0509 23:59:35.122925 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.126680 kubelet[3155]: W0509 23:59:35.122968 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.126680 kubelet[3155]: E0509 23:59:35.123022 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.126680 kubelet[3155]: E0509 23:59:35.126640 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.126680 kubelet[3155]: W0509 23:59:35.126670 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.133619 kubelet[3155]: E0509 23:59:35.133173 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.135563 kubelet[3155]: E0509 23:59:35.134559 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.135563 kubelet[3155]: W0509 23:59:35.134583 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.135563 kubelet[3155]: E0509 23:59:35.134629 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.135563 kubelet[3155]: E0509 23:59:35.135084 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.135563 kubelet[3155]: W0509 23:59:35.135096 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.135563 kubelet[3155]: E0509 23:59:35.135155 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.135563 kubelet[3155]: E0509 23:59:35.135395 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.136092 kubelet[3155]: W0509 23:59:35.135677 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.136092 kubelet[3155]: E0509 23:59:35.135701 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.136259 kubelet[3155]: E0509 23:59:35.136204 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.136259 kubelet[3155]: W0509 23:59:35.136251 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.136259 kubelet[3155]: E0509 23:59:35.136266 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.174530 containerd[1722]: time="2025-05-09T23:59:35.173800233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:35.174530 containerd[1722]: time="2025-05-09T23:59:35.173880673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:35.174530 containerd[1722]: time="2025-05-09T23:59:35.173897593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:35.174530 containerd[1722]: time="2025-05-09T23:59:35.174001193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:35.201279 kubelet[3155]: E0509 23:59:35.200582 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7sw5t" podUID="8ff719f3-efa6-438c-9acf-271732122094" May 9 23:59:35.207928 systemd[1]: Started cri-containerd-a61cbdbe8797a57ae8a16859be54b1c30a0347245a51cecfda0cfe2b02c0a85d.scope - libcontainer container a61cbdbe8797a57ae8a16859be54b1c30a0347245a51cecfda0cfe2b02c0a85d. May 9 23:59:35.267542 containerd[1722]: time="2025-05-09T23:59:35.266676811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78cb5dc7dd-tqsqw,Uid:7a684987-e4b0-40fc-9e57-1a51a1c59cb9,Namespace:calico-system,Attempt:0,} returns sandbox id \"a61cbdbe8797a57ae8a16859be54b1c30a0347245a51cecfda0cfe2b02c0a85d\"" May 9 23:59:35.271853 containerd[1722]: time="2025-05-09T23:59:35.271748730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 9 23:59:35.290850 containerd[1722]: time="2025-05-09T23:59:35.290797606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cdqhg,Uid:bac10bec-0ad2-471f-b205-fca60c9b8b1f,Namespace:calico-system,Attempt:0,}" May 9 23:59:35.301109 kubelet[3155]: E0509 23:59:35.300929 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.301109 kubelet[3155]: W0509 23:59:35.300960 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.304029 kubelet[3155]: E0509 23:59:35.301625 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.304583 kubelet[3155]: E0509 23:59:35.304555 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.304827 kubelet[3155]: W0509 23:59:35.304721 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.304827 kubelet[3155]: E0509 23:59:35.304805 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.305737 kubelet[3155]: E0509 23:59:35.305711 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.305967 kubelet[3155]: W0509 23:59:35.305855 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.305967 kubelet[3155]: E0509 23:59:35.305883 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.307045 kubelet[3155]: E0509 23:59:35.306740 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.307045 kubelet[3155]: W0509 23:59:35.306770 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.307045 kubelet[3155]: E0509 23:59:35.306801 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.307897 kubelet[3155]: E0509 23:59:35.307726 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.307897 kubelet[3155]: W0509 23:59:35.307746 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.307897 kubelet[3155]: E0509 23:59:35.307767 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.309244 kubelet[3155]: E0509 23:59:35.309159 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.309244 kubelet[3155]: W0509 23:59:35.309189 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.309244 kubelet[3155]: E0509 23:59:35.309208 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.309873 kubelet[3155]: E0509 23:59:35.309717 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.309873 kubelet[3155]: W0509 23:59:35.309732 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.309873 kubelet[3155]: E0509 23:59:35.309745 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.311269 kubelet[3155]: E0509 23:59:35.310722 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.311269 kubelet[3155]: W0509 23:59:35.310741 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.311269 kubelet[3155]: E0509 23:59:35.310755 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.311936 kubelet[3155]: E0509 23:59:35.311850 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.312139 kubelet[3155]: W0509 23:59:35.312042 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.312139 kubelet[3155]: E0509 23:59:35.312065 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.314896 kubelet[3155]: E0509 23:59:35.313828 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.314896 kubelet[3155]: W0509 23:59:35.313850 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.314896 kubelet[3155]: E0509 23:59:35.313868 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.315427 kubelet[3155]: E0509 23:59:35.315237 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.315427 kubelet[3155]: W0509 23:59:35.315257 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.315427 kubelet[3155]: E0509 23:59:35.315273 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.316876 kubelet[3155]: E0509 23:59:35.316850 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.317258 kubelet[3155]: W0509 23:59:35.316993 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.317258 kubelet[3155]: E0509 23:59:35.317017 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.320416 kubelet[3155]: E0509 23:59:35.320222 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.320416 kubelet[3155]: W0509 23:59:35.320250 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.320416 kubelet[3155]: E0509 23:59:35.320274 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.320771 kubelet[3155]: E0509 23:59:35.320753 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.320967 kubelet[3155]: W0509 23:59:35.320846 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.320967 kubelet[3155]: E0509 23:59:35.320866 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.321574 kubelet[3155]: E0509 23:59:35.321553 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.321729 kubelet[3155]: W0509 23:59:35.321685 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.322235 kubelet[3155]: E0509 23:59:35.321884 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.323123 kubelet[3155]: E0509 23:59:35.323048 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.323123 kubelet[3155]: W0509 23:59:35.323066 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.323799 kubelet[3155]: E0509 23:59:35.323198 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.324326 kubelet[3155]: E0509 23:59:35.324299 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.324852 kubelet[3155]: W0509 23:59:35.324702 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.324852 kubelet[3155]: E0509 23:59:35.324729 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.325107 kubelet[3155]: E0509 23:59:35.325068 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.325330 kubelet[3155]: W0509 23:59:35.325244 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.325614 kubelet[3155]: E0509 23:59:35.325430 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.326032 kubelet[3155]: E0509 23:59:35.325996 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.326246 kubelet[3155]: W0509 23:59:35.326119 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.326246 kubelet[3155]: E0509 23:59:35.326147 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.326589 kubelet[3155]: E0509 23:59:35.326556 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.327735 kubelet[3155]: W0509 23:59:35.327114 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.327735 kubelet[3155]: E0509 23:59:35.327160 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.330295 kubelet[3155]: E0509 23:59:35.330075 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.330295 kubelet[3155]: W0509 23:59:35.330109 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.330295 kubelet[3155]: E0509 23:59:35.330130 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.330295 kubelet[3155]: I0509 23:59:35.330168 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8ff719f3-efa6-438c-9acf-271732122094-registration-dir\") pod \"csi-node-driver-7sw5t\" (UID: \"8ff719f3-efa6-438c-9acf-271732122094\") " pod="calico-system/csi-node-driver-7sw5t" May 9 23:59:35.332092 kubelet[3155]: E0509 23:59:35.332032 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.332092 kubelet[3155]: W0509 23:59:35.332074 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.332092 kubelet[3155]: E0509 23:59:35.332099 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.332694 kubelet[3155]: I0509 23:59:35.332136 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ff719f3-efa6-438c-9acf-271732122094-kubelet-dir\") pod \"csi-node-driver-7sw5t\" (UID: \"8ff719f3-efa6-438c-9acf-271732122094\") " pod="calico-system/csi-node-driver-7sw5t" May 9 23:59:35.334671 kubelet[3155]: E0509 23:59:35.334127 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.334671 kubelet[3155]: W0509 23:59:35.334180 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.334671 kubelet[3155]: E0509 23:59:35.334221 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.334671 kubelet[3155]: I0509 23:59:35.334253 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8ff719f3-efa6-438c-9acf-271732122094-varrun\") pod \"csi-node-driver-7sw5t\" (UID: \"8ff719f3-efa6-438c-9acf-271732122094\") " pod="calico-system/csi-node-driver-7sw5t" May 9 23:59:35.336679 kubelet[3155]: E0509 23:59:35.335255 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.336679 kubelet[3155]: W0509 23:59:35.335281 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.336679 kubelet[3155]: E0509 23:59:35.335300 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.336679 kubelet[3155]: I0509 23:59:35.335336 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ccvv\" (UniqueName: \"kubernetes.io/projected/8ff719f3-efa6-438c-9acf-271732122094-kube-api-access-7ccvv\") pod \"csi-node-driver-7sw5t\" (UID: \"8ff719f3-efa6-438c-9acf-271732122094\") " pod="calico-system/csi-node-driver-7sw5t" May 9 23:59:35.336679 kubelet[3155]: E0509 23:59:35.336012 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.336679 kubelet[3155]: W0509 23:59:35.336026 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.336679 kubelet[3155]: E0509 23:59:35.336039 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.336679 kubelet[3155]: I0509 23:59:35.336059 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8ff719f3-efa6-438c-9acf-271732122094-socket-dir\") pod \"csi-node-driver-7sw5t\" (UID: \"8ff719f3-efa6-438c-9acf-271732122094\") " pod="calico-system/csi-node-driver-7sw5t" May 9 23:59:35.336960 kubelet[3155]: E0509 23:59:35.336909 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.336960 kubelet[3155]: W0509 23:59:35.336933 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.336960 kubelet[3155]: E0509 23:59:35.336948 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.337708 kubelet[3155]: E0509 23:59:35.337440 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.337708 kubelet[3155]: W0509 23:59:35.337466 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.337708 kubelet[3155]: E0509 23:59:35.337479 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.338717 kubelet[3155]: E0509 23:59:35.338614 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.338717 kubelet[3155]: W0509 23:59:35.338668 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.338717 kubelet[3155]: E0509 23:59:35.338684 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.339054 kubelet[3155]: E0509 23:59:35.338850 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.339054 kubelet[3155]: W0509 23:59:35.338865 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.339054 kubelet[3155]: E0509 23:59:35.338875 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.339802 kubelet[3155]: E0509 23:59:35.339772 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.339802 kubelet[3155]: W0509 23:59:35.339795 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.339904 kubelet[3155]: E0509 23:59:35.339814 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.341475 kubelet[3155]: E0509 23:59:35.339994 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.341475 kubelet[3155]: W0509 23:59:35.340010 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.341475 kubelet[3155]: E0509 23:59:35.340022 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.343006 kubelet[3155]: E0509 23:59:35.342767 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.343006 kubelet[3155]: W0509 23:59:35.342795 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.343006 kubelet[3155]: E0509 23:59:35.342829 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.343162 kubelet[3155]: E0509 23:59:35.343016 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.343162 kubelet[3155]: W0509 23:59:35.343025 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.343162 kubelet[3155]: E0509 23:59:35.343040 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.343800 kubelet[3155]: E0509 23:59:35.343202 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.343800 kubelet[3155]: W0509 23:59:35.343215 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.343800 kubelet[3155]: E0509 23:59:35.343224 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.343800 kubelet[3155]: E0509 23:59:35.343516 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.343800 kubelet[3155]: W0509 23:59:35.343533 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.343800 kubelet[3155]: E0509 23:59:35.343546 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.362563 containerd[1722]: time="2025-05-09T23:59:35.362291229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:35.363370 containerd[1722]: time="2025-05-09T23:59:35.362738509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:35.364005 containerd[1722]: time="2025-05-09T23:59:35.363211269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:35.364879 containerd[1722]: time="2025-05-09T23:59:35.364764148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:35.388629 systemd[1]: Started cri-containerd-d8cc9989f7fe88ffdb2fe61aaa07bfcebb158efc929b12bbfdba8bbe63541502.scope - libcontainer container d8cc9989f7fe88ffdb2fe61aaa07bfcebb158efc929b12bbfdba8bbe63541502. May 9 23:59:35.413405 containerd[1722]: time="2025-05-09T23:59:35.413121657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cdqhg,Uid:bac10bec-0ad2-471f-b205-fca60c9b8b1f,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8cc9989f7fe88ffdb2fe61aaa07bfcebb158efc929b12bbfdba8bbe63541502\"" May 9 23:59:35.437361 kubelet[3155]: E0509 23:59:35.437192 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.437361 kubelet[3155]: W0509 23:59:35.437219 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.437361 kubelet[3155]: E0509 23:59:35.437241 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.437747 kubelet[3155]: E0509 23:59:35.437718 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.437898 kubelet[3155]: W0509 23:59:35.437799 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.437898 kubelet[3155]: E0509 23:59:35.437839 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.438157 kubelet[3155]: E0509 23:59:35.438130 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.438157 kubelet[3155]: W0509 23:59:35.438152 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.438272 kubelet[3155]: E0509 23:59:35.438176 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.438447 kubelet[3155]: E0509 23:59:35.438428 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.438447 kubelet[3155]: W0509 23:59:35.438443 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.438545 kubelet[3155]: E0509 23:59:35.438462 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.438721 kubelet[3155]: E0509 23:59:35.438680 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.438721 kubelet[3155]: W0509 23:59:35.438694 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.438721 kubelet[3155]: E0509 23:59:35.438712 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.438951 kubelet[3155]: E0509 23:59:35.438928 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.438951 kubelet[3155]: W0509 23:59:35.438944 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.439037 kubelet[3155]: E0509 23:59:35.438961 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.439155 kubelet[3155]: E0509 23:59:35.439133 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.439155 kubelet[3155]: W0509 23:59:35.439148 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.439155 kubelet[3155]: E0509 23:59:35.439167 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.439495 kubelet[3155]: E0509 23:59:35.439478 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.439578 kubelet[3155]: W0509 23:59:35.439563 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.439739 kubelet[3155]: E0509 23:59:35.439678 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.440163 kubelet[3155]: E0509 23:59:35.440016 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.440163 kubelet[3155]: W0509 23:59:35.440032 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.440163 kubelet[3155]: E0509 23:59:35.440065 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.440368 kubelet[3155]: E0509 23:59:35.440354 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.440504 kubelet[3155]: W0509 23:59:35.440430 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.440504 kubelet[3155]: E0509 23:59:35.440463 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.440751 kubelet[3155]: E0509 23:59:35.440723 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.440751 kubelet[3155]: W0509 23:59:35.440743 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.440751 kubelet[3155]: E0509 23:59:35.440764 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.441192 kubelet[3155]: E0509 23:59:35.441162 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.441350 kubelet[3155]: W0509 23:59:35.441273 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.441350 kubelet[3155]: E0509 23:59:35.441310 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.441786 kubelet[3155]: E0509 23:59:35.441632 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.441786 kubelet[3155]: W0509 23:59:35.441674 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.441786 kubelet[3155]: E0509 23:59:35.441701 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.441996 kubelet[3155]: E0509 23:59:35.441982 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.442150 kubelet[3155]: W0509 23:59:35.442057 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.442150 kubelet[3155]: E0509 23:59:35.442100 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.442432 kubelet[3155]: E0509 23:59:35.442417 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.442574 kubelet[3155]: W0509 23:59:35.442502 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.442574 kubelet[3155]: E0509 23:59:35.442544 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.442937 kubelet[3155]: E0509 23:59:35.442868 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.442937 kubelet[3155]: W0509 23:59:35.442884 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.442937 kubelet[3155]: E0509 23:59:35.442916 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.443365 kubelet[3155]: E0509 23:59:35.443253 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.443365 kubelet[3155]: W0509 23:59:35.443269 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.443365 kubelet[3155]: E0509 23:59:35.443304 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.443774 kubelet[3155]: E0509 23:59:35.443679 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.443774 kubelet[3155]: W0509 23:59:35.443700 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.443774 kubelet[3155]: E0509 23:59:35.443729 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.445665 kubelet[3155]: E0509 23:59:35.445472 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.445665 kubelet[3155]: W0509 23:59:35.445500 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.445665 kubelet[3155]: E0509 23:59:35.445537 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.446260 kubelet[3155]: E0509 23:59:35.446111 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.446260 kubelet[3155]: W0509 23:59:35.446141 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.446260 kubelet[3155]: E0509 23:59:35.446184 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.446474 kubelet[3155]: E0509 23:59:35.446442 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.446474 kubelet[3155]: W0509 23:59:35.446457 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.446749 kubelet[3155]: E0509 23:59:35.446570 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:35.446972 kubelet[3155]: E0509 23:59:35.446955 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.447250 kubelet[3155]: W0509 23:59:35.447057 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.447250 kubelet[3155]: E0509 23:59:35.447087 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.448364 kubelet[3155]: E0509 23:59:35.448254 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.448933 kubelet[3155]: W0509 23:59:35.448701 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.448933 kubelet[3155]: E0509 23:59:35.448754 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.449827 kubelet[3155]: E0509 23:59:35.449502 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.449827 kubelet[3155]: W0509 23:59:35.449520 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.449827 kubelet[3155]: E0509 23:59:35.449538 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.450790 kubelet[3155]: E0509 23:59:35.450765 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.451137 kubelet[3155]: W0509 23:59:35.450982 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.451137 kubelet[3155]: E0509 23:59:35.451011 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:35.458621 kubelet[3155]: E0509 23:59:35.458579 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:35.458865 kubelet[3155]: W0509 23:59:35.458702 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:35.458865 kubelet[3155]: E0509 23:59:35.458733 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:36.911438 containerd[1722]: time="2025-05-09T23:59:36.911386426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:36.915332 containerd[1722]: time="2025-05-09T23:59:36.915139505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 9 23:59:36.920173 containerd[1722]: time="2025-05-09T23:59:36.919836984Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:36.924390 containerd[1722]: time="2025-05-09T23:59:36.924341143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:36.925090 containerd[1722]: time="2025-05-09T23:59:36.925048583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.653254173s" May 9 23:59:36.925090 containerd[1722]: time="2025-05-09T23:59:36.925089263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 9 23:59:36.926672 containerd[1722]: time="2025-05-09T23:59:36.926611182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 9 23:59:36.941951 containerd[1722]: time="2025-05-09T23:59:36.941898739Z" level=info msg="CreateContainer within sandbox \"a61cbdbe8797a57ae8a16859be54b1c30a0347245a51cecfda0cfe2b02c0a85d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 9 23:59:36.992864 containerd[1722]: time="2025-05-09T23:59:36.992804687Z" level=info msg="CreateContainer within sandbox \"a61cbdbe8797a57ae8a16859be54b1c30a0347245a51cecfda0cfe2b02c0a85d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"16542c74aa9b71139c900f22be0a3747fb45edc7b5e47f4c1592877292cb301a\"" May 9 23:59:36.995753 containerd[1722]: time="2025-05-09T23:59:36.994578806Z" level=info msg="StartContainer for \"16542c74aa9b71139c900f22be0a3747fb45edc7b5e47f4c1592877292cb301a\"" May 9 23:59:37.027861 systemd[1]: Started cri-containerd-16542c74aa9b71139c900f22be0a3747fb45edc7b5e47f4c1592877292cb301a.scope - libcontainer container 16542c74aa9b71139c900f22be0a3747fb45edc7b5e47f4c1592877292cb301a. 
May 9 23:59:37.074117 containerd[1722]: time="2025-05-09T23:59:37.073991828Z" level=info msg="StartContainer for \"16542c74aa9b71139c900f22be0a3747fb45edc7b5e47f4c1592877292cb301a\" returns successfully" May 9 23:59:37.224695 kubelet[3155]: E0509 23:59:37.223874 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7sw5t" podUID="8ff719f3-efa6-438c-9acf-271732122094" May 9 23:59:37.383381 kubelet[3155]: I0509 23:59:37.383293 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78cb5dc7dd-tqsqw" podStartSLOduration=1.726691704 podStartE2EDuration="3.383275795s" podCreationTimestamp="2025-05-09 23:59:34 +0000 UTC" firstStartedPulling="2025-05-09 23:59:35.269684451 +0000 UTC m=+14.155587031" lastFinishedPulling="2025-05-09 23:59:36.926268582 +0000 UTC m=+15.812171122" observedRunningTime="2025-05-09 23:59:37.381947276 +0000 UTC m=+16.267849856" watchObservedRunningTime="2025-05-09 23:59:37.383275795 +0000 UTC m=+16.269178375" May 9 23:59:37.444697 kubelet[3155]: E0509 23:59:37.444526 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.444697 kubelet[3155]: W0509 23:59:37.444566 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.444697 kubelet[3155]: E0509 23:59:37.444590 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.445295 kubelet[3155]: E0509 23:59:37.445121 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.445295 kubelet[3155]: W0509 23:59:37.445137 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.445295 kubelet[3155]: E0509 23:59:37.445189 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.445619 kubelet[3155]: E0509 23:59:37.445517 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.445619 kubelet[3155]: W0509 23:59:37.445530 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.445619 kubelet[3155]: E0509 23:59:37.445541 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:37.446026 kubelet[3155]: E0509 23:59:37.445865 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.446026 kubelet[3155]: W0509 23:59:37.445878 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.446026 kubelet[3155]: E0509 23:59:37.445889 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.446263 kubelet[3155]: E0509 23:59:37.446172 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.446263 kubelet[3155]: W0509 23:59:37.446184 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.446263 kubelet[3155]: E0509 23:59:37.446194 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.446556 kubelet[3155]: E0509 23:59:37.446462 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.446556 kubelet[3155]: W0509 23:59:37.446474 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.446556 kubelet[3155]: E0509 23:59:37.446484 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.446918 kubelet[3155]: E0509 23:59:37.446904 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.447068 kubelet[3155]: W0509 23:59:37.446971 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.447068 kubelet[3155]: E0509 23:59:37.446986 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.447262 kubelet[3155]: E0509 23:59:37.447249 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.447424 kubelet[3155]: W0509 23:59:37.447317 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.447424 kubelet[3155]: E0509 23:59:37.447332 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:37.447620 kubelet[3155]: E0509 23:59:37.447607 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.447836 kubelet[3155]: W0509 23:59:37.447734 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.447836 kubelet[3155]: E0509 23:59:37.447756 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.448119 kubelet[3155]: E0509 23:59:37.448026 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.448119 kubelet[3155]: W0509 23:59:37.448040 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.448119 kubelet[3155]: E0509 23:59:37.448050 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.448435 kubelet[3155]: E0509 23:59:37.448339 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.448435 kubelet[3155]: W0509 23:59:37.448351 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.448435 kubelet[3155]: E0509 23:59:37.448361 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.448938 kubelet[3155]: E0509 23:59:37.448810 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.448938 kubelet[3155]: W0509 23:59:37.448825 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.448938 kubelet[3155]: E0509 23:59:37.448843 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.449276 kubelet[3155]: E0509 23:59:37.449162 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.449276 kubelet[3155]: W0509 23:59:37.449174 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.449276 kubelet[3155]: E0509 23:59:37.449184 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:37.449714 kubelet[3155]: E0509 23:59:37.449514 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.449714 kubelet[3155]: W0509 23:59:37.449527 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.449714 kubelet[3155]: E0509 23:59:37.449538 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.449955 kubelet[3155]: E0509 23:59:37.449871 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.449955 kubelet[3155]: W0509 23:59:37.449885 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.449955 kubelet[3155]: E0509 23:59:37.449895 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.458748 kubelet[3155]: E0509 23:59:37.458690 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.458748 kubelet[3155]: W0509 23:59:37.458734 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.459040 kubelet[3155]: E0509 23:59:37.458759 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.459075 kubelet[3155]: E0509 23:59:37.459061 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.459310 kubelet[3155]: W0509 23:59:37.459100 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.459310 kubelet[3155]: E0509 23:59:37.459118 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.459310 kubelet[3155]: E0509 23:59:37.459322 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.459661 kubelet[3155]: W0509 23:59:37.459331 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.459661 kubelet[3155]: E0509 23:59:37.459349 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:37.460244 kubelet[3155]: E0509 23:59:37.460038 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.460244 kubelet[3155]: W0509 23:59:37.460062 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.460244 kubelet[3155]: E0509 23:59:37.460089 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.460965 kubelet[3155]: E0509 23:59:37.460720 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.460965 kubelet[3155]: W0509 23:59:37.460737 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.460965 kubelet[3155]: E0509 23:59:37.460759 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.461672 kubelet[3155]: E0509 23:59:37.461339 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.461672 kubelet[3155]: W0509 23:59:37.461357 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.461672 kubelet[3155]: E0509 23:59:37.461393 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.461829 kubelet[3155]: E0509 23:59:37.461710 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.461829 kubelet[3155]: W0509 23:59:37.461725 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.462065 kubelet[3155]: E0509 23:59:37.461895 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.462322 kubelet[3155]: E0509 23:59:37.462296 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.462374 kubelet[3155]: W0509 23:59:37.462320 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.462575 kubelet[3155]: E0509 23:59:37.462513 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:37.463043 kubelet[3155]: E0509 23:59:37.463017 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.463043 kubelet[3155]: W0509 23:59:37.463040 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.463433 kubelet[3155]: E0509 23:59:37.463120 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.463828 kubelet[3155]: E0509 23:59:37.463808 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.464030 kubelet[3155]: W0509 23:59:37.463931 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.464122 kubelet[3155]: E0509 23:59:37.464104 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.464426 kubelet[3155]: E0509 23:59:37.464412 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.464555 kubelet[3155]: W0509 23:59:37.464496 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.464707 kubelet[3155]: E0509 23:59:37.464596 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.464997 kubelet[3155]: E0509 23:59:37.464933 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.464997 kubelet[3155]: W0509 23:59:37.464948 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.465190 kubelet[3155]: E0509 23:59:37.465065 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.465802 kubelet[3155]: E0509 23:59:37.465432 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.465802 kubelet[3155]: W0509 23:59:37.465707 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.465802 kubelet[3155]: E0509 23:59:37.465742 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:37.466398 kubelet[3155]: E0509 23:59:37.466176 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.466398 kubelet[3155]: W0509 23:59:37.466191 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.466921 kubelet[3155]: E0509 23:59:37.466695 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.466921 kubelet[3155]: E0509 23:59:37.466769 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.466921 kubelet[3155]: W0509 23:59:37.466783 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.466921 kubelet[3155]: E0509 23:59:37.466796 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.468881 kubelet[3155]: E0509 23:59:37.468609 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.468881 kubelet[3155]: W0509 23:59:37.468631 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.468881 kubelet[3155]: E0509 23:59:37.468685 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.469203 kubelet[3155]: E0509 23:59:37.469176 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.469203 kubelet[3155]: W0509 23:59:37.469196 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.469287 kubelet[3155]: E0509 23:59:37.469216 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 23:59:37.469621 kubelet[3155]: E0509 23:59:37.469541 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 23:59:37.469621 kubelet[3155]: W0509 23:59:37.469572 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 23:59:37.469621 kubelet[3155]: E0509 23:59:37.469585 3155 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 23:59:38.168609 containerd[1722]: time="2025-05-09T23:59:38.167880612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:38.170898 containerd[1722]: time="2025-05-09T23:59:38.170835491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 9 23:59:38.178569 containerd[1722]: time="2025-05-09T23:59:38.178517889Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:38.184760 containerd[1722]: time="2025-05-09T23:59:38.184699048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:38.185964 containerd[1722]: time="2025-05-09T23:59:38.185388168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.258715306s" May 9 23:59:38.185964 containerd[1722]: time="2025-05-09T23:59:38.185429528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 9 23:59:38.188154 containerd[1722]: time="2025-05-09T23:59:38.188093327Z" level=info msg="CreateContainer within sandbox \"d8cc9989f7fe88ffdb2fe61aaa07bfcebb158efc929b12bbfdba8bbe63541502\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 9 23:59:38.247342 containerd[1722]: time="2025-05-09T23:59:38.247204953Z" level=info msg="CreateContainer within sandbox \"d8cc9989f7fe88ffdb2fe61aaa07bfcebb158efc929b12bbfdba8bbe63541502\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aa88bd333c44e23a5cd49d40ad9e31fec756710b589ecefa42cf76b14647d039\"" May 9 23:59:38.250733 containerd[1722]: time="2025-05-09T23:59:38.249631593Z" level=info msg="StartContainer for \"aa88bd333c44e23a5cd49d40ad9e31fec756710b589ecefa42cf76b14647d039\"" May 9 23:59:38.282898 systemd[1]: Started cri-containerd-aa88bd333c44e23a5cd49d40ad9e31fec756710b589ecefa42cf76b14647d039.scope - libcontainer container aa88bd333c44e23a5cd49d40ad9e31fec756710b589ecefa42cf76b14647d039. May 9 23:59:38.319090 containerd[1722]: time="2025-05-09T23:59:38.319034336Z" level=info msg="StartContainer for \"aa88bd333c44e23a5cd49d40ad9e31fec756710b589ecefa42cf76b14647d039\" returns successfully" May 9 23:59:38.344825 systemd[1]: cri-containerd-aa88bd333c44e23a5cd49d40ad9e31fec756710b589ecefa42cf76b14647d039.scope: Deactivated successfully. May 9 23:59:38.369537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa88bd333c44e23a5cd49d40ad9e31fec756710b589ecefa42cf76b14647d039-rootfs.mount: Deactivated successfully. 
May 9 23:59:38.373046 kubelet[3155]: I0509 23:59:38.372188 3155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 23:59:39.205101 containerd[1722]: time="2025-05-09T23:59:39.205023529Z" level=info msg="shim disconnected" id=aa88bd333c44e23a5cd49d40ad9e31fec756710b589ecefa42cf76b14647d039 namespace=k8s.io May 9 23:59:39.205831 containerd[1722]: time="2025-05-09T23:59:39.205581089Z" level=warning msg="cleaning up after shim disconnected" id=aa88bd333c44e23a5cd49d40ad9e31fec756710b589ecefa42cf76b14647d039 namespace=k8s.io May 9 23:59:39.205831 containerd[1722]: time="2025-05-09T23:59:39.205604769Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:59:39.223730 kubelet[3155]: E0509 23:59:39.222953 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7sw5t" podUID="8ff719f3-efa6-438c-9acf-271732122094" May 9 23:59:39.378543 containerd[1722]: time="2025-05-09T23:59:39.378495248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 9 23:59:41.224174 kubelet[3155]: E0509 23:59:41.223183 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7sw5t" podUID="8ff719f3-efa6-438c-9acf-271732122094" May 9 23:59:42.475896 containerd[1722]: time="2025-05-09T23:59:42.475833524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:42.479841 containerd[1722]: time="2025-05-09T23:59:42.479780003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 9 23:59:42.483967 containerd[1722]: time="2025-05-09T23:59:42.483880522Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:42.491178 containerd[1722]: time="2025-05-09T23:59:42.490906040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:42.491875 containerd[1722]: time="2025-05-09T23:59:42.491542840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.112993472s" May 9 23:59:42.491875 containerd[1722]: time="2025-05-09T23:59:42.491586280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 9 23:59:42.497778 containerd[1722]: time="2025-05-09T23:59:42.497693278Z" level=info msg="CreateContainer within sandbox \"d8cc9989f7fe88ffdb2fe61aaa07bfcebb158efc929b12bbfdba8bbe63541502\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 9 23:59:42.568746 containerd[1722]: time="2025-05-09T23:59:42.568661902Z" 
level=info msg="CreateContainer within sandbox \"d8cc9989f7fe88ffdb2fe61aaa07bfcebb158efc929b12bbfdba8bbe63541502\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f07b03e9d054195a0bede2419cdeb1269f3cfdfc11f9f8c81a9607b22dc2b2fa\"" May 9 23:59:42.570551 containerd[1722]: time="2025-05-09T23:59:42.570032062Z" level=info msg="StartContainer for \"f07b03e9d054195a0bede2419cdeb1269f3cfdfc11f9f8c81a9607b22dc2b2fa\"" May 9 23:59:42.614907 systemd[1]: Started cri-containerd-f07b03e9d054195a0bede2419cdeb1269f3cfdfc11f9f8c81a9607b22dc2b2fa.scope - libcontainer container f07b03e9d054195a0bede2419cdeb1269f3cfdfc11f9f8c81a9607b22dc2b2fa. May 9 23:59:42.649120 containerd[1722]: time="2025-05-09T23:59:42.648793843Z" level=info msg="StartContainer for \"f07b03e9d054195a0bede2419cdeb1269f3cfdfc11f9f8c81a9607b22dc2b2fa\" returns successfully" May 9 23:59:43.224095 kubelet[3155]: E0509 23:59:43.222715 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7sw5t" podUID="8ff719f3-efa6-438c-9acf-271732122094" May 9 23:59:43.743766 containerd[1722]: time="2025-05-09T23:59:43.743712067Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 23:59:43.748684 kubelet[3155]: I0509 23:59:43.748238 3155 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 9 23:59:43.751992 systemd[1]: cri-containerd-f07b03e9d054195a0bede2419cdeb1269f3cfdfc11f9f8c81a9607b22dc2b2fa.scope: Deactivated successfully. May 9 23:59:43.786099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f07b03e9d054195a0bede2419cdeb1269f3cfdfc11f9f8c81a9607b22dc2b2fa-rootfs.mount: Deactivated successfully. 
May 9 23:59:43.805631 kubelet[3155]: I0509 23:59:43.804782 3155 status_manager.go:890] "Failed to get status for pod" podUID="bbe36fb6-41bc-4acd-b726-7ca42d2518b3" pod="calico-apiserver/calico-apiserver-5dc459848f-csr7s" err="pods \"calico-apiserver-5dc459848f-csr7s\" is forbidden: User \"system:node:ci-4081.3.3-n-84ab9604c4\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.3-n-84ab9604c4' and this object"
May 9 23:59:43.814267 systemd[1]: Created slice kubepods-besteffort-podbbe36fb6_41bc_4acd_b726_7ca42d2518b3.slice - libcontainer container kubepods-besteffort-podbbe36fb6_41bc_4acd_b726_7ca42d2518b3.slice.
May 9 23:59:43.832189 systemd[1]: Created slice kubepods-burstable-podc6108b02_0f6c_4322_bab5_8d4e33e79daf.slice - libcontainer container kubepods-burstable-podc6108b02_0f6c_4322_bab5_8d4e33e79daf.slice.
May 9 23:59:43.844358 systemd[1]: Created slice kubepods-besteffort-pod8a08f28a_435c_4180_8af5_1b9c1f60f93b.slice - libcontainer container kubepods-besteffort-pod8a08f28a_435c_4180_8af5_1b9c1f60f93b.slice.
May 9 23:59:43.852198 systemd[1]: Created slice kubepods-besteffort-pod1563bd12_d559_40ca_9df7_724c26d2d3e7.slice - libcontainer container kubepods-besteffort-pod1563bd12_d559_40ca_9df7_724c26d2d3e7.slice.
May 9 23:59:43.861983 systemd[1]: Created slice kubepods-burstable-podb6350e5e_ccbb_494f_aa01_4897633a5f14.slice - libcontainer container kubepods-burstable-podb6350e5e_ccbb_494f_aa01_4897633a5f14.slice.
May 9 23:59:44.066975 kubelet[3155]: W0509 23:59:43.807468 3155 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081.3.3-n-84ab9604c4" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.3-n-84ab9604c4' and this object
May 9 23:59:44.066975 kubelet[3155]: E0509 23:59:43.807513 3155 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081.3.3-n-84ab9604c4\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.3-n-84ab9604c4' and this object" logger="UnhandledError"
May 9 23:59:44.066975 kubelet[3155]: W0509 23:59:43.807566 3155 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.3-n-84ab9604c4" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.3-n-84ab9604c4' and this object
May 9 23:59:44.066975 kubelet[3155]: E0509 23:59:43.807579 3155 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081.3.3-n-84ab9604c4\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.3-n-84ab9604c4' and this object" logger="UnhandledError"
May 9 23:59:44.066975 kubelet[3155]: W0509 23:59:43.807621 3155 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.3-n-84ab9604c4" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.3-n-84ab9604c4' and this object
May 9 23:59:44.067264 kubelet[3155]: E0509 23:59:43.807660 3155 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.3-n-84ab9604c4\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.3-n-84ab9604c4' and this object" logger="UnhandledError"
May 9 23:59:44.067264 kubelet[3155]: I0509 23:59:43.812358 3155 status_manager.go:890] "Failed to get status for pod" podUID="c6108b02-0f6c-4322-bab5-8d4e33e79daf" pod="kube-system/coredns-668d6bf9bc-nrnrs" err="pods \"coredns-668d6bf9bc-nrnrs\" is forbidden: User \"system:node:ci-4081.3.3-n-84ab9604c4\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.3-n-84ab9604c4' and this object"
May 9 23:59:44.067264 kubelet[3155]: I0509 23:59:43.907798 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmz88\" (UniqueName: \"kubernetes.io/projected/b6350e5e-ccbb-494f-aa01-4897633a5f14-kube-api-access-mmz88\") pod \"coredns-668d6bf9bc-55grc\" (UID: \"b6350e5e-ccbb-494f-aa01-4897633a5f14\") " pod="kube-system/coredns-668d6bf9bc-55grc"
May 9 23:59:44.067264 kubelet[3155]: I0509 23:59:43.907858 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95x2l\" (UniqueName: \"kubernetes.io/projected/c6108b02-0f6c-4322-bab5-8d4e33e79daf-kube-api-access-95x2l\") pod \"coredns-668d6bf9bc-nrnrs\" (UID: \"c6108b02-0f6c-4322-bab5-8d4e33e79daf\") " pod="kube-system/coredns-668d6bf9bc-nrnrs"
May 9 23:59:44.067264 kubelet[3155]: I0509 23:59:43.907897 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6350e5e-ccbb-494f-aa01-4897633a5f14-config-volume\") pod \"coredns-668d6bf9bc-55grc\" (UID: \"b6350e5e-ccbb-494f-aa01-4897633a5f14\") " pod="kube-system/coredns-668d6bf9bc-55grc"
May 9 23:59:44.067424 kubelet[3155]: I0509 23:59:43.907918 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhkzf\" (UniqueName: \"kubernetes.io/projected/1563bd12-d559-40ca-9df7-724c26d2d3e7-kube-api-access-zhkzf\") pod \"calico-kube-controllers-9f97fddfb-hvskp\" (UID: \"1563bd12-d559-40ca-9df7-724c26d2d3e7\") " pod="calico-system/calico-kube-controllers-9f97fddfb-hvskp"
May 9 23:59:44.067424 kubelet[3155]: I0509 23:59:43.907944 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8a08f28a-435c-4180-8af5-1b9c1f60f93b-calico-apiserver-certs\") pod \"calico-apiserver-5dc459848f-p7gmj\" (UID: \"8a08f28a-435c-4180-8af5-1b9c1f60f93b\") " pod="calico-apiserver/calico-apiserver-5dc459848f-p7gmj"
May 9 23:59:44.067424 kubelet[3155]: I0509 23:59:43.908061 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-967g5\" (UniqueName: \"kubernetes.io/projected/8a08f28a-435c-4180-8af5-1b9c1f60f93b-kube-api-access-967g5\") pod \"calico-apiserver-5dc459848f-p7gmj\" (UID: \"8a08f28a-435c-4180-8af5-1b9c1f60f93b\") " pod="calico-apiserver/calico-apiserver-5dc459848f-p7gmj"
May 9 23:59:44.067424 kubelet[3155]: I0509 23:59:43.908110 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxg62\" (UniqueName: \"kubernetes.io/projected/bbe36fb6-41bc-4acd-b726-7ca42d2518b3-kube-api-access-sxg62\") pod \"calico-apiserver-5dc459848f-csr7s\" (UID: \"bbe36fb6-41bc-4acd-b726-7ca42d2518b3\") " pod="calico-apiserver/calico-apiserver-5dc459848f-csr7s"
May 9 23:59:44.067424 kubelet[3155]: I0509 23:59:43.908132 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6108b02-0f6c-4322-bab5-8d4e33e79daf-config-volume\") pod \"coredns-668d6bf9bc-nrnrs\" (UID: \"c6108b02-0f6c-4322-bab5-8d4e33e79daf\") " pod="kube-system/coredns-668d6bf9bc-nrnrs"
May 9 23:59:44.067623 kubelet[3155]: I0509 23:59:43.908164 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1563bd12-d559-40ca-9df7-724c26d2d3e7-tigera-ca-bundle\") pod \"calico-kube-controllers-9f97fddfb-hvskp\" (UID: \"1563bd12-d559-40ca-9df7-724c26d2d3e7\") " pod="calico-system/calico-kube-controllers-9f97fddfb-hvskp"
May 9 23:59:44.067623 kubelet[3155]: I0509 23:59:43.908195 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bbe36fb6-41bc-4acd-b726-7ca42d2518b3-calico-apiserver-certs\") pod \"calico-apiserver-5dc459848f-csr7s\" (UID: \"bbe36fb6-41bc-4acd-b726-7ca42d2518b3\") " pod="calico-apiserver/calico-apiserver-5dc459848f-csr7s"
May 9 23:59:44.372395 containerd[1722]: time="2025-05-09T23:59:44.372263477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9f97fddfb-hvskp,Uid:1563bd12-d559-40ca-9df7-724c26d2d3e7,Namespace:calico-system,Attempt:0,}"
May 9 23:59:44.938333 containerd[1722]: time="2025-05-09T23:59:44.938261016Z" level=info msg="shim disconnected" id=f07b03e9d054195a0bede2419cdeb1269f3cfdfc11f9f8c81a9607b22dc2b2fa namespace=k8s.io
May 9 23:59:44.938333 containerd[1722]: time="2025-05-09T23:59:44.938333536Z" level=warning msg="cleaning up after shim disconnected" id=f07b03e9d054195a0bede2419cdeb1269f3cfdfc11f9f8c81a9607b22dc2b2fa namespace=k8s.io
May 9 23:59:44.938333 containerd[1722]: time="2025-05-09T23:59:44.938343496Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 23:59:45.010525 kubelet[3155]: E0509 23:59:45.010471 3155 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
May 9 23:59:45.011271 kubelet[3155]: E0509 23:59:45.010816 3155 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
May 9 23:59:45.011271 kubelet[3155]: E0509 23:59:45.010892 3155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a08f28a-435c-4180-8af5-1b9c1f60f93b-calico-apiserver-certs podName:8a08f28a-435c-4180-8af5-1b9c1f60f93b nodeName:}" failed. No retries permitted until 2025-05-09 23:59:45.510869198 +0000 UTC m=+24.396771738 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8a08f28a-435c-4180-8af5-1b9c1f60f93b-calico-apiserver-certs") pod "calico-apiserver-5dc459848f-p7gmj" (UID: "8a08f28a-435c-4180-8af5-1b9c1f60f93b") : failed to sync secret cache: timed out waiting for the condition
May 9 23:59:45.011804 kubelet[3155]: E0509 23:59:45.011727 3155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bbe36fb6-41bc-4acd-b726-7ca42d2518b3-calico-apiserver-certs podName:bbe36fb6-41bc-4acd-b726-7ca42d2518b3 nodeName:}" failed. No retries permitted until 2025-05-09 23:59:45.511699598 +0000 UTC m=+24.397602178 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/bbe36fb6-41bc-4acd-b726-7ca42d2518b3-calico-apiserver-certs") pod "calico-apiserver-5dc459848f-csr7s" (UID: "bbe36fb6-41bc-4acd-b726-7ca42d2518b3") : failed to sync secret cache: timed out waiting for the condition May 9 23:59:45.018554 kubelet[3155]: E0509 23:59:45.018406 3155 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 9 23:59:45.018554 kubelet[3155]: E0509 23:59:45.018457 3155 projected.go:194] Error preparing data for projected volume kube-api-access-sxg62 for pod calico-apiserver/calico-apiserver-5dc459848f-csr7s: failed to sync configmap cache: timed out waiting for the condition May 9 23:59:45.018554 kubelet[3155]: E0509 23:59:45.018525 3155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bbe36fb6-41bc-4acd-b726-7ca42d2518b3-kube-api-access-sxg62 podName:bbe36fb6-41bc-4acd-b726-7ca42d2518b3 nodeName:}" failed. No retries permitted until 2025-05-09 23:59:45.518505996 +0000 UTC m=+24.404408576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sxg62" (UniqueName: "kubernetes.io/projected/bbe36fb6-41bc-4acd-b726-7ca42d2518b3-kube-api-access-sxg62") pod "calico-apiserver-5dc459848f-csr7s" (UID: "bbe36fb6-41bc-4acd-b726-7ca42d2518b3") : failed to sync configmap cache: timed out waiting for the condition May 9 23:59:45.025209 kubelet[3155]: E0509 23:59:45.025166 3155 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 9 23:59:45.025209 kubelet[3155]: E0509 23:59:45.025206 3155 projected.go:194] Error preparing data for projected volume kube-api-access-967g5 for pod calico-apiserver/calico-apiserver-5dc459848f-p7gmj: failed to sync configmap cache: timed out waiting for the condition May 9 23:59:45.025403 kubelet[3155]: E0509 23:59:45.025298 3155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a08f28a-435c-4180-8af5-1b9c1f60f93b-kube-api-access-967g5 podName:8a08f28a-435c-4180-8af5-1b9c1f60f93b nodeName:}" failed. No retries permitted until 2025-05-09 23:59:45.525278154 +0000 UTC m=+24.411180734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-967g5" (UniqueName: "kubernetes.io/projected/8a08f28a-435c-4180-8af5-1b9c1f60f93b-kube-api-access-967g5") pod "calico-apiserver-5dc459848f-p7gmj" (UID: "8a08f28a-435c-4180-8af5-1b9c1f60f93b") : failed to sync configmap cache: timed out waiting for the condition May 9 23:59:45.043411 containerd[1722]: time="2025-05-09T23:59:45.043177270Z" level=error msg="Failed to destroy network for sandbox \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.045374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4-shm.mount: Deactivated successfully. 
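The kube-api-access-* volumes failing to mount here are kubelet-generated projected volumes: a bound service-account token, the kube-root-ca.crt configmap, and the pod namespace via the downward API. That is why they are gated on the configmap cache sync that is timing out, and the cache in turn cannot sync until the node authorizer registers the pod-to-node binding (the same window that produced the "no relationship found between node ... and this object" reflector errors above). A sketch of what such a volume expands to, using the Kubernetes API types (field values are illustrative; the kubelet generates the real ones):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // ptr is a small helper for the pointer-typed API fields.
    func ptr[T any](v T) *T { return &v }

    func main() {
    	vol := corev1.Volume{
    		Name: "kube-api-access-sxg62",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{
    					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
    						Path:              "token",
    						ExpirationSeconds: ptr(int64(3607)),
    					}},
    					{ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
    						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
    					}},
    					{DownwardAPI: &corev1.DownwardAPIProjection{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							Path:     "namespace",
    							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
    						}},
    					}},
    				},
    			},
    		},
    	}
    	fmt.Printf("%+v\n", vol)
    }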
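"durationBeforeRetry 500ms" is the first step of the volume operation backoff: each failed MountVolume.SetUp doubles the wait before the next attempt up to a cap (2m2s in kubelet's exponentialbackoff package, stated here from memory and best treated as an assumption). A sketch of the schedule:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const (
    		initialDelay = 500 * time.Millisecond         // matches the log
    		factor       = 2.0                            // doubling per failure
    		maxDelay     = 2*time.Minute + 2*time.Second  // assumed cap
    	)
    	delay := initialDelay
    	failedAt := time.Now()
    	for i := 1; i <= 6; i++ {
    		fmt.Printf("failure %d: no retries permitted until %s (durationBeforeRetry %s)\n",
    			i, failedAt.Add(delay).Format(time.RFC3339Nano), delay)
    		delay = time.Duration(float64(delay) * factor)
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }

Here the first retries are scheduled for 23:59:45.51, the point at which this stretch of the log ends.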
May 9 23:59:45.046151 containerd[1722]: time="2025-05-09T23:59:45.045576269Z" level=error msg="encountered an error cleaning up failed sandbox \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.046151 containerd[1722]: time="2025-05-09T23:59:45.045689909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9f97fddfb-hvskp,Uid:1563bd12-d559-40ca-9df7-724c26d2d3e7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.046248 kubelet[3155]: E0509 23:59:45.046035 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.046248 kubelet[3155]: E0509 23:59:45.046103 3155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9f97fddfb-hvskp" May 9 23:59:45.046248 kubelet[3155]: E0509 23:59:45.046123 3155 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9f97fddfb-hvskp" May 9 23:59:45.046344 kubelet[3155]: E0509 23:59:45.046165 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9f97fddfb-hvskp_calico-system(1563bd12-d559-40ca-9df7-724c26d2d3e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9f97fddfb-hvskp_calico-system(1563bd12-d559-40ca-9df7-724c26d2d3e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9f97fddfb-hvskp" podUID="1563bd12-d559-40ca-9df7-724c26d2d3e7" May 9 23:59:45.231347 systemd[1]: Created slice kubepods-besteffort-pod8ff719f3_efa6_438c_9acf_271732122094.slice - libcontainer container kubepods-besteffort-pod8ff719f3_efa6_438c_9acf_271732122094.slice. 
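Every sandbox create and delete from here on fails at the same place: the Calico CNI plugin stats /var/lib/calico/nodename to learn which Calico node object it belongs to, and that file is only written once the calico-node container is up (its node:v3.29.3 image pull starts a few entries below). A sketch of the gating check, with the error text approximated from the log:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	const nodenameFile = "/var/lib/calico/nodename"
    	// calico-node writes this file at startup; the CNI plugin refuses
    	// to handle any ADD/DEL before it exists.
    	if _, err := os.Stat(nodenameFile); err != nil {
    		fmt.Printf("plugin type=\"calico\" failed: %v: check that the "+
    			"calico/node container is running and has mounted /var/lib/calico/\n", err)
    		return
    	}
    	name, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		fmt.Println("read failed:", err)
    		return
    	}
    	fmt.Println("calico nodename:", string(name))
    }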
May 9 23:59:45.234480 containerd[1722]: time="2025-05-09T23:59:45.234430782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sw5t,Uid:8ff719f3-efa6-438c-9acf-271732122094,Namespace:calico-system,Attempt:0,}" May 9 23:59:45.273834 containerd[1722]: time="2025-05-09T23:59:45.271995492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-55grc,Uid:b6350e5e-ccbb-494f-aa01-4897633a5f14,Namespace:kube-system,Attempt:0,}" May 9 23:59:45.276052 containerd[1722]: time="2025-05-09T23:59:45.276009931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrnrs,Uid:c6108b02-0f6c-4322-bab5-8d4e33e79daf,Namespace:kube-system,Attempt:0,}" May 9 23:59:45.363050 containerd[1722]: time="2025-05-09T23:59:45.362898110Z" level=error msg="Failed to destroy network for sandbox \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.363611 containerd[1722]: time="2025-05-09T23:59:45.363487910Z" level=error msg="encountered an error cleaning up failed sandbox \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.363611 containerd[1722]: time="2025-05-09T23:59:45.363564350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sw5t,Uid:8ff719f3-efa6-438c-9acf-271732122094,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.365105 kubelet[3155]: E0509 23:59:45.364012 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.365105 kubelet[3155]: E0509 23:59:45.364082 3155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sw5t" May 9 23:59:45.365105 kubelet[3155]: E0509 23:59:45.364103 3155 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sw5t" May 9 23:59:45.365278 kubelet[3155]: E0509 
23:59:45.364149 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7sw5t_calico-system(8ff719f3-efa6-438c-9acf-271732122094)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7sw5t_calico-system(8ff719f3-efa6-438c-9acf-271732122094)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7sw5t" podUID="8ff719f3-efa6-438c-9acf-271732122094" May 9 23:59:45.402223 kubelet[3155]: I0509 23:59:45.401010 3155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" May 9 23:59:45.402393 containerd[1722]: time="2025-05-09T23:59:45.401776180Z" level=info msg="StopPodSandbox for \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\"" May 9 23:59:45.402393 containerd[1722]: time="2025-05-09T23:59:45.401960100Z" level=info msg="Ensure that sandbox 2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4 in task-service has been cleanup successfully" May 9 23:59:45.408942 containerd[1722]: time="2025-05-09T23:59:45.408883498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 9 23:59:45.414660 kubelet[3155]: I0509 23:59:45.414166 3155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" May 9 23:59:45.415550 containerd[1722]: time="2025-05-09T23:59:45.415497977Z" level=info msg="StopPodSandbox for \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\"" May 9 23:59:45.415743 containerd[1722]: time="2025-05-09T23:59:45.415715977Z" level=info msg="Ensure that sandbox 8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b in task-service has been cleanup successfully" May 9 23:59:45.492380 containerd[1722]: time="2025-05-09T23:59:45.492218997Z" level=error msg="Failed to destroy network for sandbox \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.495780 containerd[1722]: time="2025-05-09T23:59:45.492578157Z" level=error msg="encountered an error cleaning up failed sandbox \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.495780 containerd[1722]: time="2025-05-09T23:59:45.492628957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-55grc,Uid:b6350e5e-ccbb-494f-aa01-4897633a5f14,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.495890 kubelet[3155]: 
E0509 23:59:45.492881 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.495890 kubelet[3155]: E0509 23:59:45.492939 3155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-55grc" May 9 23:59:45.495890 kubelet[3155]: E0509 23:59:45.492960 3155 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-55grc" May 9 23:59:45.495987 kubelet[3155]: E0509 23:59:45.492999 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-55grc_kube-system(b6350e5e-ccbb-494f-aa01-4897633a5f14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-55grc_kube-system(b6350e5e-ccbb-494f-aa01-4897633a5f14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-55grc" podUID="b6350e5e-ccbb-494f-aa01-4897633a5f14" May 9 23:59:45.497915 containerd[1722]: time="2025-05-09T23:59:45.497802796Z" level=error msg="StopPodSandbox for \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\" failed" error="failed to destroy network for sandbox \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.498161 containerd[1722]: time="2025-05-09T23:59:45.497813436Z" level=error msg="StopPodSandbox for \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\" failed" error="failed to destroy network for sandbox \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.498434 kubelet[3155]: E0509 23:59:45.498379 3155 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" podSandboxID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" May 9 23:59:45.498519 kubelet[3155]: E0509 23:59:45.498447 3155 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b"} May 9 23:59:45.498519 kubelet[3155]: E0509 23:59:45.498508 3155 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ff719f3-efa6-438c-9acf-271732122094\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 23:59:45.498622 kubelet[3155]: E0509 23:59:45.498530 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ff719f3-efa6-438c-9acf-271732122094\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7sw5t" podUID="8ff719f3-efa6-438c-9acf-271732122094" May 9 23:59:45.498622 kubelet[3155]: E0509 23:59:45.498561 3155 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" May 9 23:59:45.498622 kubelet[3155]: E0509 23:59:45.498585 3155 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4"} May 9 23:59:45.498622 kubelet[3155]: E0509 23:59:45.498604 3155 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1563bd12-d559-40ca-9df7-724c26d2d3e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 23:59:45.498843 kubelet[3155]: E0509 23:59:45.498619 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1563bd12-d559-40ca-9df7-724c26d2d3e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9f97fddfb-hvskp" podUID="1563bd12-d559-40ca-9df7-724c26d2d3e7" May 9 23:59:45.515597 
containerd[1722]: time="2025-05-09T23:59:45.515538192Z" level=error msg="Failed to destroy network for sandbox \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.515947 containerd[1722]: time="2025-05-09T23:59:45.515912351Z" level=error msg="encountered an error cleaning up failed sandbox \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.515991 containerd[1722]: time="2025-05-09T23:59:45.515972231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrnrs,Uid:c6108b02-0f6c-4322-bab5-8d4e33e79daf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.516246 kubelet[3155]: E0509 23:59:45.516203 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.516321 kubelet[3155]: E0509 23:59:45.516271 3155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nrnrs" May 9 23:59:45.516321 kubelet[3155]: E0509 23:59:45.516294 3155 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nrnrs" May 9 23:59:45.516401 kubelet[3155]: E0509 23:59:45.516369 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nrnrs_kube-system(c6108b02-0f6c-4322-bab5-8d4e33e79daf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nrnrs_kube-system(c6108b02-0f6c-4322-bab5-8d4e33e79daf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nrnrs" 
podUID="c6108b02-0f6c-4322-bab5-8d4e33e79daf" May 9 23:59:45.566581 containerd[1722]: time="2025-05-09T23:59:45.566519099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc459848f-csr7s,Uid:bbe36fb6-41bc-4acd-b726-7ca42d2518b3,Namespace:calico-apiserver,Attempt:0,}" May 9 23:59:45.672957 containerd[1722]: time="2025-05-09T23:59:45.672826872Z" level=error msg="Failed to destroy network for sandbox \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.673493 containerd[1722]: time="2025-05-09T23:59:45.673360352Z" level=error msg="encountered an error cleaning up failed sandbox \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.673493 containerd[1722]: time="2025-05-09T23:59:45.673446392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc459848f-csr7s,Uid:bbe36fb6-41bc-4acd-b726-7ca42d2518b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.674692 kubelet[3155]: E0509 23:59:45.673891 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.674692 kubelet[3155]: E0509 23:59:45.673965 3155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc459848f-csr7s" May 9 23:59:45.674692 kubelet[3155]: E0509 23:59:45.673984 3155 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc459848f-csr7s" May 9 23:59:45.674885 kubelet[3155]: E0509 23:59:45.674033 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dc459848f-csr7s_calico-apiserver(bbe36fb6-41bc-4acd-b726-7ca42d2518b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dc459848f-csr7s_calico-apiserver(bbe36fb6-41bc-4acd-b726-7ca42d2518b3)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dc459848f-csr7s" podUID="bbe36fb6-41bc-4acd-b726-7ca42d2518b3" May 9 23:59:45.875053 containerd[1722]: time="2025-05-09T23:59:45.874860502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc459848f-p7gmj,Uid:8a08f28a-435c-4180-8af5-1b9c1f60f93b,Namespace:calico-apiserver,Attempt:0,}" May 9 23:59:45.981707 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b-shm.mount: Deactivated successfully. May 9 23:59:45.992731 containerd[1722]: time="2025-05-09T23:59:45.992665872Z" level=error msg="Failed to destroy network for sandbox \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.993113 containerd[1722]: time="2025-05-09T23:59:45.993077792Z" level=error msg="encountered an error cleaning up failed sandbox \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.993181 containerd[1722]: time="2025-05-09T23:59:45.993156112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc459848f-p7gmj,Uid:8a08f28a-435c-4180-8af5-1b9c1f60f93b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.993860 kubelet[3155]: E0509 23:59:45.993433 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:45.993860 kubelet[3155]: E0509 23:59:45.993517 3155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc459848f-p7gmj" May 9 23:59:45.993860 kubelet[3155]: E0509 23:59:45.993554 3155 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc459848f-p7gmj" May 9 23:59:45.994016 kubelet[3155]: E0509 23:59:45.993602 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dc459848f-p7gmj_calico-apiserver(8a08f28a-435c-4180-8af5-1b9c1f60f93b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dc459848f-p7gmj_calico-apiserver(8a08f28a-435c-4180-8af5-1b9c1f60f93b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dc459848f-p7gmj" podUID="8a08f28a-435c-4180-8af5-1b9c1f60f93b" May 9 23:59:45.996909 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948-shm.mount: Deactivated successfully. May 9 23:59:46.417347 kubelet[3155]: I0509 23:59:46.417312 3155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" May 9 23:59:46.418137 containerd[1722]: time="2025-05-09T23:59:46.418092566Z" level=info msg="StopPodSandbox for \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\"" May 9 23:59:46.419404 containerd[1722]: time="2025-05-09T23:59:46.418289646Z" level=info msg="Ensure that sandbox 242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1 in task-service has been cleanup successfully" May 9 23:59:46.421027 kubelet[3155]: I0509 23:59:46.420992 3155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" May 9 23:59:46.421851 containerd[1722]: time="2025-05-09T23:59:46.421813325Z" level=info msg="StopPodSandbox for \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\"" May 9 23:59:46.424312 kubelet[3155]: I0509 23:59:46.423886 3155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" May 9 23:59:46.424431 containerd[1722]: time="2025-05-09T23:59:46.424034604Z" level=info msg="Ensure that sandbox db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948 in task-service has been cleanup successfully" May 9 23:59:46.424431 containerd[1722]: time="2025-05-09T23:59:46.424384164Z" level=info msg="StopPodSandbox for \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\"" May 9 23:59:46.424617 containerd[1722]: time="2025-05-09T23:59:46.424584924Z" level=info msg="Ensure that sandbox fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9 in task-service has been cleanup successfully" May 9 23:59:46.428592 kubelet[3155]: I0509 23:59:46.428551 3155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" May 9 23:59:46.429280 containerd[1722]: time="2025-05-09T23:59:46.429215283Z" level=info msg="StopPodSandbox for \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\"" May 9 23:59:46.430714 containerd[1722]: time="2025-05-09T23:59:46.430541243Z" level=info msg="Ensure that sandbox 
3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229 in task-service has been cleanup successfully" May 9 23:59:46.488868 containerd[1722]: time="2025-05-09T23:59:46.488817228Z" level=error msg="StopPodSandbox for \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\" failed" error="failed to destroy network for sandbox \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:46.490567 containerd[1722]: time="2025-05-09T23:59:46.490441708Z" level=error msg="StopPodSandbox for \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\" failed" error="failed to destroy network for sandbox \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:46.490831 kubelet[3155]: E0509 23:59:46.490649 3155 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" May 9 23:59:46.490831 kubelet[3155]: E0509 23:59:46.490704 3155 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948"} May 9 23:59:46.490831 kubelet[3155]: E0509 23:59:46.490740 3155 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a08f28a-435c-4180-8af5-1b9c1f60f93b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 23:59:46.490831 kubelet[3155]: E0509 23:59:46.490769 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a08f28a-435c-4180-8af5-1b9c1f60f93b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dc459848f-p7gmj" podUID="8a08f28a-435c-4180-8af5-1b9c1f60f93b" May 9 23:59:46.491805 kubelet[3155]: E0509 23:59:46.490800 3155 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" May 9 23:59:46.491805 kubelet[3155]: E0509 23:59:46.490817 3155 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9"} May 9 23:59:46.491805 kubelet[3155]: E0509 23:59:46.490835 3155 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bbe36fb6-41bc-4acd-b726-7ca42d2518b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 23:59:46.491805 kubelet[3155]: E0509 23:59:46.490855 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bbe36fb6-41bc-4acd-b726-7ca42d2518b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dc459848f-csr7s" podUID="bbe36fb6-41bc-4acd-b726-7ca42d2518b3" May 9 23:59:46.492348 containerd[1722]: time="2025-05-09T23:59:46.492135427Z" level=error msg="StopPodSandbox for \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\" failed" error="failed to destroy network for sandbox \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:46.492590 kubelet[3155]: E0509 23:59:46.492443 3155 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" May 9 23:59:46.492590 kubelet[3155]: E0509 23:59:46.492499 3155 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1"} May 9 23:59:46.492590 kubelet[3155]: E0509 23:59:46.492530 3155 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6350e5e-ccbb-494f-aa01-4897633a5f14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 23:59:46.492590 kubelet[3155]: E0509 23:59:46.492550 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6350e5e-ccbb-494f-aa01-4897633a5f14\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-55grc" podUID="b6350e5e-ccbb-494f-aa01-4897633a5f14" May 9 23:59:46.502390 containerd[1722]: time="2025-05-09T23:59:46.501949025Z" level=error msg="StopPodSandbox for \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\" failed" error="failed to destroy network for sandbox \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 23:59:46.502554 kubelet[3155]: E0509 23:59:46.502186 3155 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" May 9 23:59:46.502554 kubelet[3155]: E0509 23:59:46.502234 3155 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229"} May 9 23:59:46.502554 kubelet[3155]: E0509 23:59:46.502267 3155 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c6108b02-0f6c-4322-bab5-8d4e33e79daf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 23:59:46.502554 kubelet[3155]: E0509 23:59:46.502295 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6108b02-0f6c-4322-bab5-8d4e33e79daf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nrnrs" podUID="c6108b02-0f6c-4322-bab5-8d4e33e79daf" May 9 23:59:49.392288 kubelet[3155]: I0509 23:59:49.391729 3155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 23:59:49.599550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount298444420.mount: Deactivated successfully. 
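
Note: every sandbox failure above has the same root cause. The Calico CNI plugin gates both the ADD and DEL operations on /var/lib/calico/nodename, a file that only exists once the calico/node container is running with /var/lib/calico/ mounted from the host; until then kubelet keeps retrying and logging the same error. A minimal Go sketch of that readiness gate follows; the function name is hypothetical (this is not Calico's actual source), but the path and the error wording are taken directly from the log:

    package main

    import (
        "fmt"
        "os"
    )

    // nodenameFile is written by calico/node at startup. Until it exists,
    // every sandbox add/delete fails with the error repeated in the log above.
    const nodenameFile = "/var/lib/calico/nodename"

    // ensureNodenameReady (hypothetical name) reproduces the check: a plain
    // stat(2) on the nodename file, with the hint appended to the error.
    func ensureNodenameReady() error {
        if _, err := os.Stat(nodenameFile); err != nil {
            // err already reads "stat /var/lib/calico/nodename: no such file
            // or directory", matching the plugin output captured above.
            return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return nil
    }

    func main() {
        if err := ensureNodenameReady(); err != nil {
            fmt.Println("CNI not ready:", err)
            return
        }
        fmt.Println("CNI ready")
    }

Because the same stat is performed on the delete path, the cleanup ("Failed to destroy network") fails for the same reason as the setup, which is why each pod produces both error shapes.
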
May 9 23:59:49.999999 containerd[1722]: time="2025-05-09T23:59:49.999262710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:50.004172 containerd[1722]: time="2025-05-09T23:59:50.004119349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 9 23:59:50.008785 containerd[1722]: time="2025-05-09T23:59:50.008735068Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:50.014803 containerd[1722]: time="2025-05-09T23:59:50.014730546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:50.015890 containerd[1722]: time="2025-05-09T23:59:50.015388146Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.606452808s" May 9 23:59:50.015890 containerd[1722]: time="2025-05-09T23:59:50.015431746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 9 23:59:50.039353 containerd[1722]: time="2025-05-09T23:59:50.038758380Z" level=info msg="CreateContainer within sandbox \"d8cc9989f7fe88ffdb2fe61aaa07bfcebb158efc929b12bbfdba8bbe63541502\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 9 23:59:50.105664 containerd[1722]: time="2025-05-09T23:59:50.105586324Z" level=info msg="CreateContainer within sandbox \"d8cc9989f7fe88ffdb2fe61aaa07bfcebb158efc929b12bbfdba8bbe63541502\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"09e00de07aa88963e77cab300e9aaf4bc340f1e5d55dbe39f2f9f009667678ed\"" May 9 23:59:50.107811 containerd[1722]: time="2025-05-09T23:59:50.106858523Z" level=info msg="StartContainer for \"09e00de07aa88963e77cab300e9aaf4bc340f1e5d55dbe39f2f9f009667678ed\"" May 9 23:59:50.142896 systemd[1]: Started cri-containerd-09e00de07aa88963e77cab300e9aaf4bc340f1e5d55dbe39f2f9f009667678ed.scope - libcontainer container 09e00de07aa88963e77cab300e9aaf4bc340f1e5d55dbe39f2f9f009667678ed. May 9 23:59:50.177343 containerd[1722]: time="2025-05-09T23:59:50.177049746Z" level=info msg="StartContainer for \"09e00de07aa88963e77cab300e9aaf4bc340f1e5d55dbe39f2f9f009667678ed\" returns successfully" May 9 23:59:50.417498 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 9 23:59:50.417679 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
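
Note: the pull requested earlier ("PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" at 23:59:45.408) completes here in the logged 4.606452808s, after which the calico-node container is created and started and the kernel loads WireGuard, which Calico can use for optional in-cluster encryption. The duration can be cross-checked from the two event timestamps; a throwaway Go sketch, with both timestamps copied verbatim from the log (the ~50 microsecond residual is simply where containerd starts its internal timer relative to emitting each line):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // "PullImage ..." event timestamp, from the log above.
        started, err := time.Parse(time.RFC3339Nano, "2025-05-09T23:59:45.408883498Z")
        if err != nil {
            panic(err)
        }
        // "Pulled image ... in 4.606452808s" event timestamp.
        pulled, err := time.Parse(time.RFC3339Nano, "2025-05-09T23:59:50.015388146Z")
        if err != nil {
            panic(err)
        }
        fmt.Println(pulled.Sub(started)) // 4.606504648s, ~matching the logged 4.606452808s
    }
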
May 9 23:59:50.467416 kubelet[3155]: I0509 23:59:50.467321 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cdqhg" podStartSLOduration=1.8660395429999999 podStartE2EDuration="16.467305633s" podCreationTimestamp="2025-05-09 23:59:34 +0000 UTC" firstStartedPulling="2025-05-09 23:59:35.415625696 +0000 UTC m=+14.301528236" lastFinishedPulling="2025-05-09 23:59:50.016891786 +0000 UTC m=+28.902794326" observedRunningTime="2025-05-09 23:59:50.467064433 +0000 UTC m=+29.352967013" watchObservedRunningTime="2025-05-09 23:59:50.467305633 +0000 UTC m=+29.353208213" May 9 23:59:52.057729 kernel: bpftool[4430]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 9 23:59:52.268550 systemd-networkd[1507]: vxlan.calico: Link UP May 9 23:59:52.268559 systemd-networkd[1507]: vxlan.calico: Gained carrier May 9 23:59:53.375922 systemd-networkd[1507]: vxlan.calico: Gained IPv6LL May 9 23:59:57.227662 containerd[1722]: time="2025-05-09T23:59:57.227327351Z" level=info msg="StopPodSandbox for \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\"" May 9 23:59:57.227662 containerd[1722]: time="2025-05-09T23:59:57.227594831Z" level=info msg="StopPodSandbox for \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\"" May 9 23:59:57.229759 containerd[1722]: time="2025-05-09T23:59:57.227329551Z" level=info msg="StopPodSandbox for \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\"" May 9 23:59:57.510229 containerd[1722]: 2025-05-09 23:59:57.446 [INFO][4543] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" May 9 23:59:57.510229 containerd[1722]: 2025-05-09 23:59:57.446 [INFO][4543] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" iface="eth0" netns="/var/run/netns/cni-c2b2f4d9-3f55-698a-c6fa-7918700d9bf0" May 9 23:59:57.510229 containerd[1722]: 2025-05-09 23:59:57.446 [INFO][4543] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" iface="eth0" netns="/var/run/netns/cni-c2b2f4d9-3f55-698a-c6fa-7918700d9bf0" May 9 23:59:57.510229 containerd[1722]: 2025-05-09 23:59:57.447 [INFO][4543] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" iface="eth0" netns="/var/run/netns/cni-c2b2f4d9-3f55-698a-c6fa-7918700d9bf0" May 9 23:59:57.510229 containerd[1722]: 2025-05-09 23:59:57.447 [INFO][4543] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" May 9 23:59:57.510229 containerd[1722]: 2025-05-09 23:59:57.447 [INFO][4543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" May 9 23:59:57.510229 containerd[1722]: 2025-05-09 23:59:57.487 [INFO][4565] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" HandleID="k8s-pod-network.2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 9 23:59:57.510229 containerd[1722]: 2025-05-09 23:59:57.487 [INFO][4565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 23:59:57.510229 containerd[1722]: 2025-05-09 23:59:57.487 [INFO][4565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 23:59:57.510229 containerd[1722]: 2025-05-09 23:59:57.497 [WARNING][4565] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" HandleID="k8s-pod-network.2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 9 23:59:57.510229 containerd[1722]: 2025-05-09 23:59:57.497 [INFO][4565] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" HandleID="k8s-pod-network.2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 9 23:59:57.510229 containerd[1722]: 2025-05-09 23:59:57.500 [INFO][4565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 23:59:57.510229 containerd[1722]: 2025-05-09 23:59:57.506 [INFO][4543] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" May 9 23:59:57.512930 containerd[1722]: time="2025-05-09T23:59:57.512767730Z" level=info msg="TearDown network for sandbox \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\" successfully" May 9 23:59:57.512930 containerd[1722]: time="2025-05-09T23:59:57.512810650Z" level=info msg="StopPodSandbox for \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\" returns successfully" May 9 23:59:57.515121 systemd[1]: run-netns-cni\x2dc2b2f4d9\x2d3f55\x2d698a\x2dc6fa\x2d7918700d9bf0.mount: Deactivated successfully. May 9 23:59:57.519611 containerd[1722]: time="2025-05-09T23:59:57.519564088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9f97fddfb-hvskp,Uid:1563bd12-d559-40ca-9df7-724c26d2d3e7,Namespace:calico-system,Attempt:1,}" May 9 23:59:57.538887 containerd[1722]: 2025-05-09 23:59:57.451 [INFO][4554] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" May 9 23:59:57.538887 containerd[1722]: 2025-05-09 23:59:57.453 [INFO][4554] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" iface="eth0" netns="/var/run/netns/cni-ef82b84c-5204-358e-3215-e95a8c16bfa6" May 9 23:59:57.538887 containerd[1722]: 2025-05-09 23:59:57.454 [INFO][4554] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" iface="eth0" netns="/var/run/netns/cni-ef82b84c-5204-358e-3215-e95a8c16bfa6" May 9 23:59:57.538887 containerd[1722]: 2025-05-09 23:59:57.456 [INFO][4554] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" iface="eth0" netns="/var/run/netns/cni-ef82b84c-5204-358e-3215-e95a8c16bfa6" May 9 23:59:57.538887 containerd[1722]: 2025-05-09 23:59:57.456 [INFO][4554] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" May 9 23:59:57.538887 containerd[1722]: 2025-05-09 23:59:57.456 [INFO][4554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" May 9 23:59:57.538887 containerd[1722]: 2025-05-09 23:59:57.517 [INFO][4570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" HandleID="k8s-pod-network.fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 9 23:59:57.538887 containerd[1722]: 2025-05-09 23:59:57.518 [INFO][4570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 23:59:57.538887 containerd[1722]: 2025-05-09 23:59:57.518 [INFO][4570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 23:59:57.538887 containerd[1722]: 2025-05-09 23:59:57.530 [WARNING][4570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" HandleID="k8s-pod-network.fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 9 23:59:57.538887 containerd[1722]: 2025-05-09 23:59:57.530 [INFO][4570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" HandleID="k8s-pod-network.fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 9 23:59:57.538887 containerd[1722]: 2025-05-09 23:59:57.532 [INFO][4570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 23:59:57.538887 containerd[1722]: 2025-05-09 23:59:57.535 [INFO][4554] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" May 9 23:59:57.541565 systemd[1]: run-netns-cni\x2def82b84c\x2d5204\x2d358e\x2d3215\x2de95a8c16bfa6.mount: Deactivated successfully. 
May 9 23:59:57.542013 containerd[1722]: time="2025-05-09T23:59:57.541700923Z" level=info msg="TearDown network for sandbox \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\" successfully" May 9 23:59:57.542013 containerd[1722]: time="2025-05-09T23:59:57.541740603Z" level=info msg="StopPodSandbox for \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\" returns successfully" May 9 23:59:57.543654 containerd[1722]: time="2025-05-09T23:59:57.543243323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc459848f-csr7s,Uid:bbe36fb6-41bc-4acd-b726-7ca42d2518b3,Namespace:calico-apiserver,Attempt:1,}" May 9 23:59:57.559256 containerd[1722]: 2025-05-09 23:59:57.455 [INFO][4544] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" May 9 23:59:57.559256 containerd[1722]: 2025-05-09 23:59:57.455 [INFO][4544] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" iface="eth0" netns="/var/run/netns/cni-c401b532-e4de-38a9-c03d-7ef175df9b86" May 9 23:59:57.559256 containerd[1722]: 2025-05-09 23:59:57.456 [INFO][4544] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" iface="eth0" netns="/var/run/netns/cni-c401b532-e4de-38a9-c03d-7ef175df9b86" May 9 23:59:57.559256 containerd[1722]: 2025-05-09 23:59:57.456 [INFO][4544] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" iface="eth0" netns="/var/run/netns/cni-c401b532-e4de-38a9-c03d-7ef175df9b86" May 9 23:59:57.559256 containerd[1722]: 2025-05-09 23:59:57.456 [INFO][4544] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" May 9 23:59:57.559256 containerd[1722]: 2025-05-09 23:59:57.456 [INFO][4544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" May 9 23:59:57.559256 containerd[1722]: 2025-05-09 23:59:57.527 [INFO][4572] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" HandleID="k8s-pod-network.8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" Workload="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 9 23:59:57.559256 containerd[1722]: 2025-05-09 23:59:57.527 [INFO][4572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 23:59:57.559256 containerd[1722]: 2025-05-09 23:59:57.532 [INFO][4572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 23:59:57.559256 containerd[1722]: 2025-05-09 23:59:57.550 [WARNING][4572] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" HandleID="k8s-pod-network.8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" Workload="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 9 23:59:57.559256 containerd[1722]: 2025-05-09 23:59:57.550 [INFO][4572] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" HandleID="k8s-pod-network.8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" Workload="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 9 23:59:57.559256 containerd[1722]: 2025-05-09 23:59:57.553 [INFO][4572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 23:59:57.559256 containerd[1722]: 2025-05-09 23:59:57.557 [INFO][4544] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" May 9 23:59:57.559724 containerd[1722]: time="2025-05-09T23:59:57.559408760Z" level=info msg="TearDown network for sandbox \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\" successfully" May 9 23:59:57.559724 containerd[1722]: time="2025-05-09T23:59:57.559435200Z" level=info msg="StopPodSandbox for \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\" returns successfully" May 9 23:59:57.562311 containerd[1722]: time="2025-05-09T23:59:57.562255799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sw5t,Uid:8ff719f3-efa6-438c-9acf-271732122094,Namespace:calico-system,Attempt:1,}" May 9 23:59:57.563056 systemd[1]: run-netns-cni\x2dc401b532\x2de4de\x2d38a9\x2dc03d\x2d7ef175df9b86.mount: Deactivated successfully. May 9 23:59:57.809547 systemd-networkd[1507]: calib98dafff4f0: Link UP May 9 23:59:57.811044 systemd-networkd[1507]: calib98dafff4f0: Gained carrier May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.654 [INFO][4589] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0 calico-kube-controllers-9f97fddfb- calico-system 1563bd12-d559-40ca-9df7-724c26d2d3e7 766 0 2025-05-09 23:59:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9f97fddfb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-n-84ab9604c4 calico-kube-controllers-9f97fddfb-hvskp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib98dafff4f0 [] []}} ContainerID="57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" Namespace="calico-system" Pod="calico-kube-controllers-9f97fddfb-hvskp" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-" May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.654 [INFO][4589] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" Namespace="calico-system" Pod="calico-kube-controllers-9f97fddfb-hvskp" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.711 [INFO][4619] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" HandleID="k8s-pod-network.57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.732 [INFO][4619] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" HandleID="k8s-pod-network.57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011c270), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-84ab9604c4", "pod":"calico-kube-controllers-9f97fddfb-hvskp", "timestamp":"2025-05-09 23:59:57.711562287 +0000 UTC"}, Hostname:"ci-4081.3.3-n-84ab9604c4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.732 [INFO][4619] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.732 [INFO][4619] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.732 [INFO][4619] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-84ab9604c4' May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.737 [INFO][4619] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.748 [INFO][4619] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.760 [INFO][4619] ipam/ipam.go 489: Trying affinity for 192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.764 [INFO][4619] ipam/ipam.go 155: Attempting to load block cidr=192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.771 [INFO][4619] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.771 [INFO][4619] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.192/26 handle="k8s-pod-network.57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.777 [INFO][4619] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49 May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.789 [INFO][4619] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.39.192/26 handle="k8s-pod-network.57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.801 [INFO][4619] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.39.193/26] block=192.168.39.192/26 
handle="k8s-pod-network.57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.801 [INFO][4619] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.193/26] handle="k8s-pod-network.57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.801 [INFO][4619] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 23:59:57.846554 containerd[1722]: 2025-05-09 23:59:57.801 [INFO][4619] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.193/26] IPv6=[] ContainerID="57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" HandleID="k8s-pod-network.57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 9 23:59:57.847182 containerd[1722]: 2025-05-09 23:59:57.804 [INFO][4589] cni-plugin/k8s.go 386: Populated endpoint ContainerID="57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" Namespace="calico-system" Pod="calico-kube-controllers-9f97fddfb-hvskp" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0", GenerateName:"calico-kube-controllers-9f97fddfb-", Namespace:"calico-system", SelfLink:"", UID:"1563bd12-d559-40ca-9df7-724c26d2d3e7", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9f97fddfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"", Pod:"calico-kube-controllers-9f97fddfb-hvskp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.39.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib98dafff4f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 23:59:57.847182 containerd[1722]: 2025-05-09 23:59:57.805 [INFO][4589] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.39.193/32] ContainerID="57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" Namespace="calico-system" Pod="calico-kube-controllers-9f97fddfb-hvskp" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 9 23:59:57.847182 containerd[1722]: 2025-05-09 23:59:57.805 [INFO][4589] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib98dafff4f0 ContainerID="57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" Namespace="calico-system" Pod="calico-kube-controllers-9f97fddfb-hvskp" 
WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 9 23:59:57.847182 containerd[1722]: 2025-05-09 23:59:57.808 [INFO][4589] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" Namespace="calico-system" Pod="calico-kube-controllers-9f97fddfb-hvskp" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 9 23:59:57.847182 containerd[1722]: 2025-05-09 23:59:57.808 [INFO][4589] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" Namespace="calico-system" Pod="calico-kube-controllers-9f97fddfb-hvskp" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0", GenerateName:"calico-kube-controllers-9f97fddfb-", Namespace:"calico-system", SelfLink:"", UID:"1563bd12-d559-40ca-9df7-724c26d2d3e7", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9f97fddfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49", Pod:"calico-kube-controllers-9f97fddfb-hvskp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.39.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib98dafff4f0", MAC:"12:03:64:5e:f7:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 23:59:57.847182 containerd[1722]: 2025-05-09 23:59:57.839 [INFO][4589] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49" Namespace="calico-system" Pod="calico-kube-controllers-9f97fddfb-hvskp" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 9 23:59:57.903243 containerd[1722]: time="2025-05-09T23:59:57.903099685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:57.903243 containerd[1722]: time="2025-05-09T23:59:57.903170005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:57.903243 containerd[1722]: time="2025-05-09T23:59:57.903196525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:57.904710 containerd[1722]: time="2025-05-09T23:59:57.904070725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:57.932008 systemd[1]: Started cri-containerd-57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49.scope - libcontainer container 57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49. May 9 23:59:57.940043 systemd-networkd[1507]: cali39e19212d44: Link UP May 9 23:59:57.940917 systemd-networkd[1507]: cali39e19212d44: Gained carrier May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.706 [INFO][4606] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0 calico-apiserver-5dc459848f- calico-apiserver bbe36fb6-41bc-4acd-b726-7ca42d2518b3 767 0 2025-05-09 23:59:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dc459848f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-84ab9604c4 calico-apiserver-5dc459848f-csr7s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali39e19212d44 [] []}} ContainerID="bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-csr7s" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-" May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.707 [INFO][4606] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-csr7s" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.770 [INFO][4634] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" HandleID="k8s-pod-network.bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.850 [INFO][4634] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" HandleID="k8s-pod-network.bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003196f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-84ab9604c4", "pod":"calico-apiserver-5dc459848f-csr7s", "timestamp":"2025-05-09 23:59:57.770111354 +0000 UTC"}, Hostname:"ci-4081.3.3-n-84ab9604c4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.851 [INFO][4634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
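Note: the transaction above walks Calico's full IPAM happy path for calico-kube-controllers-9f97fddfb-hvskp: take the host-wide lock, find the node's affine block, load it, claim one IPv4, write the block back, release the lock. The request driving it is dumped verbatim as ipam.AutoAssignArgs, and the same sequence repeats below for each pod. A minimal Go sketch of constructing that request, with every value copied from the log; the import path is an assumption (Calico's packages have moved between repos), so treat this as illustrative, not the plugin's own code.

    package main

    import (
        "fmt"

        "github.com/projectcalico/calico/libcalico-go/lib/ipam" // path assumed
    )

    func main() {
        // HandleID ties the allocation to the container, as in the dump above.
        handle := "k8s-pod-network.57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49"
        args := ipam.AutoAssignArgs{
            Num4:     1, // "Auto-assign 1 ipv4"
            Num6:     0, // "... 0 ipv6 addrs"
            HandleID: &handle,
            Attrs: map[string]string{
                "namespace": "calico-system",
                "node":      "ci-4081.3.3-n-84ab9604c4",
                "pod":       "calico-kube-controllers-9f97fddfb-hvskp",
                "timestamp": "2025-05-09 23:59:57.711562287 +0000 UTC",
            },
            Hostname:    "ci-4081.3.3-n-84ab9604c4",
            IntendedUse: "Workload",
        }
        fmt.Printf("%+v\n", args)
    }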
May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.851 [INFO][4634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.851 [INFO][4634] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-84ab9604c4' May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.858 [INFO][4634] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.867 [INFO][4634] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.879 [INFO][4634] ipam/ipam.go 489: Trying affinity for 192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.883 [INFO][4634] ipam/ipam.go 155: Attempting to load block cidr=192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.889 [INFO][4634] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.889 [INFO][4634] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.192/26 handle="k8s-pod-network.bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.891 [INFO][4634] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.898 [INFO][4634] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.39.192/26 handle="k8s-pod-network.bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.916 [INFO][4634] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.39.194/26] block=192.168.39.192/26 handle="k8s-pod-network.bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.916 [INFO][4634] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.194/26] handle="k8s-pod-network.bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.916 [INFO][4634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
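Note: both assignments so far ("Trying affinity for 192.168.39.192/26" in each transaction) land in the same block because a /26 affinity block holds 64 addresses, enough for every pod on this node. A self-contained stdlib check of that arithmetic:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // A /26 block holds 2^(32-26) = 64 addresses, .192 through .255.
        _, block, err := net.ParseCIDR("192.168.39.192/26")
        if err != nil {
            panic(err)
        }
        ones, bits := block.Mask.Size()
        fmt.Printf("%s holds %d addresses\n", block, 1<<(bits-ones)) // 64

        // Both addresses claimed above fall inside the node's block:
        for _, ip := range []string{"192.168.39.193", "192.168.39.194"} {
            fmt.Println(ip, "in block:", block.Contains(net.ParseIP(ip)))
        }
    }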
May 9 23:59:57.989603 containerd[1722]: 2025-05-09 23:59:57.916 [INFO][4634] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.194/26] IPv6=[] ContainerID="bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" HandleID="k8s-pod-network.bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 9 23:59:57.990896 containerd[1722]: 2025-05-09 23:59:57.931 [INFO][4606] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-csr7s" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0", GenerateName:"calico-apiserver-5dc459848f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbe36fb6-41bc-4acd-b726-7ca42d2518b3", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc459848f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"", Pod:"calico-apiserver-5dc459848f-csr7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39e19212d44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 23:59:57.990896 containerd[1722]: 2025-05-09 23:59:57.932 [INFO][4606] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.39.194/32] ContainerID="bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-csr7s" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 9 23:59:57.990896 containerd[1722]: 2025-05-09 23:59:57.932 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39e19212d44 ContainerID="bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-csr7s" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 9 23:59:57.990896 containerd[1722]: 2025-05-09 23:59:57.941 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-csr7s" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 9 23:59:57.990896 containerd[1722]: 2025-05-09 23:59:57.943 [INFO][4606] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-csr7s" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0", GenerateName:"calico-apiserver-5dc459848f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbe36fb6-41bc-4acd-b726-7ca42d2518b3", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc459848f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a", Pod:"calico-apiserver-5dc459848f-csr7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39e19212d44", MAC:"aa:2b:5a:80:6c:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 23:59:57.990896 containerd[1722]: 2025-05-09 23:59:57.978 [INFO][4606] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-csr7s" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 9 23:59:58.037368 systemd-networkd[1507]: cali77c1d22eb3a: Link UP May 9 23:59:58.043442 systemd-networkd[1507]: cali77c1d22eb3a: Gained carrier May 9 23:59:58.048082 containerd[1722]: time="2025-05-09T23:59:58.047598334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9f97fddfb-hvskp,Uid:1563bd12-d559-40ca-9df7-724c26d2d3e7,Namespace:calico-system,Attempt:1,} returns sandbox id \"57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49\"" May 9 23:59:58.055121 containerd[1722]: time="2025-05-09T23:59:58.054981052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 9 23:59:58.084236 containerd[1722]: time="2025-05-09T23:59:58.081170167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:58.084236 containerd[1722]: time="2025-05-09T23:59:58.081384047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:58.084236 containerd[1722]: time="2025-05-09T23:59:58.081400167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:58.084236 containerd[1722]: time="2025-05-09T23:59:58.081546087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:57.803 [INFO][4629] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0 csi-node-driver- calico-system 8ff719f3-efa6-438c-9acf-271732122094 768 0 2025-05-09 23:59:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-n-84ab9604c4 csi-node-driver-7sw5t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali77c1d22eb3a [] []}} ContainerID="a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" Namespace="calico-system" Pod="csi-node-driver-7sw5t" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-" May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:57.803 [INFO][4629] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" Namespace="calico-system" Pod="csi-node-driver-7sw5t" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:57.890 [INFO][4653] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" HandleID="k8s-pod-network.a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" Workload="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:57.950 [INFO][4653] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" HandleID="k8s-pod-network.a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" Workload="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003798f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-84ab9604c4", "pod":"csi-node-driver-7sw5t", "timestamp":"2025-05-09 23:59:57.890566448 +0000 UTC"}, Hostname:"ci-4081.3.3-n-84ab9604c4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:57.950 [INFO][4653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:57.950 [INFO][4653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
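Note: three CNI ADDs are interleaved at this point (plugin goroutines [4619], [4634] and [4653]), which is why each transaction brackets its work with "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock": claims against the shared block are serialized so no two ADDs can write the same allocation. A toy model of that serialization, not Calico's actual lock implementation:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var hostLock sync.Mutex // stands in for the "host-wide IPAM lock"
        next := 193             // next free host byte in 192.168.39.192/26
        var wg sync.WaitGroup
        for _, pod := range []string{"calico-kube-controllers", "calico-apiserver", "csi-node-driver"} {
            wg.Add(1)
            go func(pod string) {
                defer wg.Done()
                hostLock.Lock() // "About to acquire host-wide IPAM lock."
                ip := fmt.Sprintf("192.168.39.%d/26", next)
                next++
                hostLock.Unlock() // "Released host-wide IPAM lock."
                fmt.Println(pod, "->", ip)
            }(pod)
        }
        wg.Wait()
        // Which pod gets which address depends on scheduling; the lock only
        // guarantees that no two ADDs claim the same one.
    }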
May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:57.950 [INFO][4653] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-84ab9604c4' May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:57.957 [INFO][4653] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:57.969 [INFO][4653] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:57.984 [INFO][4653] ipam/ipam.go 489: Trying affinity for 192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:57.988 [INFO][4653] ipam/ipam.go 155: Attempting to load block cidr=192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:57.993 [INFO][4653] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:57.993 [INFO][4653] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.192/26 handle="k8s-pod-network.a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:57.997 [INFO][4653] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556 May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:58.005 [INFO][4653] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.39.192/26 handle="k8s-pod-network.a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:58.026 [INFO][4653] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.39.195/26] block=192.168.39.192/26 handle="k8s-pod-network.a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:58.026 [INFO][4653] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.195/26] handle="k8s-pod-network.a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:58.026 [INFO][4653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
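Note: each allocation above is keyed by a HandleID of the form k8s-pod-network.<containerID>; the handle, not the pod name, is what release uses later (see the "Releasing address using handleID" lines further down). The convention as it appears throughout this log:

    package main

    import "fmt"

    // handleID mirrors the naming visible in these lines: the allocation is
    // keyed to the container, not to the pod.
    func handleID(containerID string) string {
        return "k8s-pod-network." + containerID
    }

    func main() {
        fmt.Println(handleID("a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556"))
    }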
May 9 23:59:58.084928 containerd[1722]: 2025-05-09 23:59:58.026 [INFO][4653] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.195/26] IPv6=[] ContainerID="a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" HandleID="k8s-pod-network.a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" Workload="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 9 23:59:58.086909 containerd[1722]: 2025-05-09 23:59:58.033 [INFO][4629] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" Namespace="calico-system" Pod="csi-node-driver-7sw5t" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ff719f3-efa6-438c-9acf-271732122094", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"", Pod:"csi-node-driver-7sw5t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.39.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali77c1d22eb3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 23:59:58.086909 containerd[1722]: 2025-05-09 23:59:58.033 [INFO][4629] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.39.195/32] ContainerID="a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" Namespace="calico-system" Pod="csi-node-driver-7sw5t" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 9 23:59:58.086909 containerd[1722]: 2025-05-09 23:59:58.033 [INFO][4629] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77c1d22eb3a ContainerID="a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" Namespace="calico-system" Pod="csi-node-driver-7sw5t" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 9 23:59:58.086909 containerd[1722]: 2025-05-09 23:59:58.046 [INFO][4629] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" Namespace="calico-system" Pod="csi-node-driver-7sw5t" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 9 23:59:58.086909 containerd[1722]: 2025-05-09 23:59:58.049 [INFO][4629] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" 
Namespace="calico-system" Pod="csi-node-driver-7sw5t" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ff719f3-efa6-438c-9acf-271732122094", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556", Pod:"csi-node-driver-7sw5t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.39.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali77c1d22eb3a", MAC:"0e:3f:8b:29:43:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 23:59:58.086909 containerd[1722]: 2025-05-09 23:59:58.075 [INFO][4629] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556" Namespace="calico-system" Pod="csi-node-driver-7sw5t" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 9 23:59:58.116899 systemd[1]: Started cri-containerd-bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a.scope - libcontainer container bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a. May 9 23:59:58.131949 containerd[1722]: time="2025-05-09T23:59:58.131588276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:58.131949 containerd[1722]: time="2025-05-09T23:59:58.131707596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:58.131949 containerd[1722]: time="2025-05-09T23:59:58.131719036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:58.131949 containerd[1722]: time="2025-05-09T23:59:58.131800156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:58.161349 systemd[1]: Started cri-containerd-a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556.scope - libcontainer container a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556. 
May 9 23:59:58.164575 containerd[1722]: time="2025-05-09T23:59:58.164261509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc459848f-csr7s,Uid:bbe36fb6-41bc-4acd-b726-7ca42d2518b3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a\"" May 9 23:59:58.189830 containerd[1722]: time="2025-05-09T23:59:58.189589543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sw5t,Uid:8ff719f3-efa6-438c-9acf-271732122094,Namespace:calico-system,Attempt:1,} returns sandbox id \"a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556\"" May 9 23:59:58.224029 containerd[1722]: time="2025-05-09T23:59:58.223966736Z" level=info msg="StopPodSandbox for \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\"" May 9 23:59:58.224369 containerd[1722]: time="2025-05-09T23:59:58.224253896Z" level=info msg="StopPodSandbox for \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\"" May 9 23:59:58.353607 containerd[1722]: 2025-05-09 23:59:58.296 [INFO][4851] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" May 9 23:59:58.353607 containerd[1722]: 2025-05-09 23:59:58.298 [INFO][4851] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" iface="eth0" netns="/var/run/netns/cni-b1d8fa97-40e3-188c-2e41-16c72d38d146" May 9 23:59:58.353607 containerd[1722]: 2025-05-09 23:59:58.299 [INFO][4851] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" iface="eth0" netns="/var/run/netns/cni-b1d8fa97-40e3-188c-2e41-16c72d38d146" May 9 23:59:58.353607 containerd[1722]: 2025-05-09 23:59:58.300 [INFO][4851] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" iface="eth0" netns="/var/run/netns/cni-b1d8fa97-40e3-188c-2e41-16c72d38d146" May 9 23:59:58.353607 containerd[1722]: 2025-05-09 23:59:58.300 [INFO][4851] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" May 9 23:59:58.353607 containerd[1722]: 2025-05-09 23:59:58.300 [INFO][4851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" May 9 23:59:58.353607 containerd[1722]: 2025-05-09 23:59:58.333 [INFO][4861] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" HandleID="k8s-pod-network.242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 9 23:59:58.353607 containerd[1722]: 2025-05-09 23:59:58.334 [INFO][4861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 23:59:58.353607 containerd[1722]: 2025-05-09 23:59:58.334 [INFO][4861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 23:59:58.353607 containerd[1722]: 2025-05-09 23:59:58.344 [WARNING][4861] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" HandleID="k8s-pod-network.242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 9 23:59:58.353607 containerd[1722]: 2025-05-09 23:59:58.344 [INFO][4861] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" HandleID="k8s-pod-network.242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 9 23:59:58.353607 containerd[1722]: 2025-05-09 23:59:58.346 [INFO][4861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 23:59:58.353607 containerd[1722]: 2025-05-09 23:59:58.349 [INFO][4851] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" May 9 23:59:58.355200 containerd[1722]: time="2025-05-09T23:59:58.354737788Z" level=info msg="TearDown network for sandbox \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\" successfully" May 9 23:59:58.355200 containerd[1722]: time="2025-05-09T23:59:58.354785108Z" level=info msg="StopPodSandbox for \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\" returns successfully" May 9 23:59:58.373631 containerd[1722]: time="2025-05-09T23:59:58.373420864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-55grc,Uid:b6350e5e-ccbb-494f-aa01-4897633a5f14,Namespace:kube-system,Attempt:1,}" May 9 23:59:58.377295 containerd[1722]: 2025-05-09 23:59:58.301 [INFO][4844] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" May 9 23:59:58.377295 containerd[1722]: 2025-05-09 23:59:58.301 [INFO][4844] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" iface="eth0" netns="/var/run/netns/cni-305cb63e-4b38-ec3c-3a54-8810ac93723b" May 9 23:59:58.377295 containerd[1722]: 2025-05-09 23:59:58.301 [INFO][4844] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" iface="eth0" netns="/var/run/netns/cni-305cb63e-4b38-ec3c-3a54-8810ac93723b" May 9 23:59:58.377295 containerd[1722]: 2025-05-09 23:59:58.301 [INFO][4844] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" iface="eth0" netns="/var/run/netns/cni-305cb63e-4b38-ec3c-3a54-8810ac93723b" May 9 23:59:58.377295 containerd[1722]: 2025-05-09 23:59:58.301 [INFO][4844] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" May 9 23:59:58.377295 containerd[1722]: 2025-05-09 23:59:58.301 [INFO][4844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" May 9 23:59:58.377295 containerd[1722]: 2025-05-09 23:59:58.338 [INFO][4863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" HandleID="k8s-pod-network.3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 9 23:59:58.377295 containerd[1722]: 2025-05-09 23:59:58.338 [INFO][4863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 23:59:58.377295 containerd[1722]: 2025-05-09 23:59:58.346 [INFO][4863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 23:59:58.377295 containerd[1722]: 2025-05-09 23:59:58.366 [WARNING][4863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" HandleID="k8s-pod-network.3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 9 23:59:58.377295 containerd[1722]: 2025-05-09 23:59:58.366 [INFO][4863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" HandleID="k8s-pod-network.3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 9 23:59:58.377295 containerd[1722]: 2025-05-09 23:59:58.370 [INFO][4863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 23:59:58.377295 containerd[1722]: 2025-05-09 23:59:58.374 [INFO][4844] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" May 9 23:59:58.377295 containerd[1722]: time="2025-05-09T23:59:58.377273943Z" level=info msg="TearDown network for sandbox \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\" successfully" May 9 23:59:58.377295 containerd[1722]: time="2025-05-09T23:59:58.377305503Z" level=info msg="StopPodSandbox for \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\" returns successfully" May 9 23:59:58.378318 containerd[1722]: time="2025-05-09T23:59:58.378266623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrnrs,Uid:c6108b02-0f6c-4322-bab5-8d4e33e79daf,Namespace:kube-system,Attempt:1,}" May 9 23:59:58.529061 systemd[1]: run-netns-cni\x2d305cb63e\x2d4b38\x2dec3c\x2d3a54\x2d8810ac93723b.mount: Deactivated successfully. May 9 23:59:58.529158 systemd[1]: run-netns-cni\x2db1d8fa97\x2d40e3\x2d188c\x2d2e41\x2d16c72d38d146.mount: Deactivated successfully. 
May 9 23:59:58.634347 systemd-networkd[1507]: calie98c5dbbe03: Link UP May 9 23:59:58.634489 systemd-networkd[1507]: calie98c5dbbe03: Gained carrier May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.516 [INFO][4874] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0 coredns-668d6bf9bc- kube-system b6350e5e-ccbb-494f-aa01-4897633a5f14 784 0 2025-05-09 23:59:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-84ab9604c4 coredns-668d6bf9bc-55grc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie98c5dbbe03 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-55grc" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-" May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.517 [INFO][4874] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-55grc" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.570 [INFO][4900] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" HandleID="k8s-pod-network.609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.587 [INFO][4900] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" HandleID="k8s-pod-network.609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003187a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-84ab9604c4", "pod":"coredns-668d6bf9bc-55grc", "timestamp":"2025-05-09 23:59:58.570489541 +0000 UTC"}, Hostname:"ci-4081.3.3-n-84ab9604c4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.588 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.588 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
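Note: "calie98c5dbbe03: Link UP" and "Gained carrier" are systemd-networkd observing the host side of the new pod's veth pair; the name matches the InterfaceName field in the endpoint dumps. A rough stdlib diagnostic for listing those links on a node:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        // Every "cali..." link is the host side of one pod's veth pair.
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            if strings.HasPrefix(ifc.Name, "cali") {
                fmt.Println(ifc.Name, ifc.HardwareAddr, ifc.Flags)
            }
        }
    }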
May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.588 [INFO][4900] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-84ab9604c4' May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.591 [INFO][4900] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.598 [INFO][4900] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.604 [INFO][4900] ipam/ipam.go 489: Trying affinity for 192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.606 [INFO][4900] ipam/ipam.go 155: Attempting to load block cidr=192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.609 [INFO][4900] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.609 [INFO][4900] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.192/26 handle="k8s-pod-network.609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.611 [INFO][4900] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.616 [INFO][4900] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.39.192/26 handle="k8s-pod-network.609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.625 [INFO][4900] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.39.196/26] block=192.168.39.192/26 handle="k8s-pod-network.609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.625 [INFO][4900] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.196/26] handle="k8s-pod-network.609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.626 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
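Note: in the endpoint dump that follows, the coredns ports are printed in hex (Port:0x35, Port:0x23c1); decoded, they are the standard DNS and metrics ports:

    package main

    import "fmt"

    func main() {
        fmt.Println(0x35)   // 53: the dns and dns-tcp named ports
        fmt.Println(0x23c1) // 9153: the coredns metrics port
    }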
May 9 23:59:58.659385 containerd[1722]: 2025-05-09 23:59:58.626 [INFO][4900] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.196/26] IPv6=[] ContainerID="609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" HandleID="k8s-pod-network.609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 9 23:59:58.659991 containerd[1722]: 2025-05-09 23:59:58.630 [INFO][4874] cni-plugin/k8s.go 386: Populated endpoint ContainerID="609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-55grc" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b6350e5e-ccbb-494f-aa01-4897633a5f14", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"", Pod:"coredns-668d6bf9bc-55grc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie98c5dbbe03", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 23:59:58.659991 containerd[1722]: 2025-05-09 23:59:58.630 [INFO][4874] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.39.196/32] ContainerID="609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-55grc" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 9 23:59:58.659991 containerd[1722]: 2025-05-09 23:59:58.630 [INFO][4874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie98c5dbbe03 ContainerID="609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-55grc" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 9 23:59:58.659991 containerd[1722]: 2025-05-09 23:59:58.632 [INFO][4874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-55grc" 
WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 9 23:59:58.659991 containerd[1722]: 2025-05-09 23:59:58.634 [INFO][4874] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-55grc" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b6350e5e-ccbb-494f-aa01-4897633a5f14", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e", Pod:"coredns-668d6bf9bc-55grc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie98c5dbbe03", MAC:"8a:25:54:06:58:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 23:59:58.659991 containerd[1722]: 2025-05-09 23:59:58.656 [INFO][4874] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-55grc" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 9 23:59:58.700189 containerd[1722]: time="2025-05-09T23:59:58.697682394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:58.700189 containerd[1722]: time="2025-05-09T23:59:58.699588033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:58.700189 containerd[1722]: time="2025-05-09T23:59:58.699607073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:58.700189 containerd[1722]: time="2025-05-09T23:59:58.699769233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:58.736886 systemd[1]: Started cri-containerd-609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e.scope - libcontainer container 609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e. May 9 23:59:58.762090 systemd-networkd[1507]: calif66e6889ef7: Link UP May 9 23:59:58.763520 systemd-networkd[1507]: calif66e6889ef7: Gained carrier May 9 23:59:58.790536 containerd[1722]: time="2025-05-09T23:59:58.790492413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-55grc,Uid:b6350e5e-ccbb-494f-aa01-4897633a5f14,Namespace:kube-system,Attempt:1,} returns sandbox id \"609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e\"" May 9 23:59:58.800032 containerd[1722]: time="2025-05-09T23:59:58.799753891Z" level=info msg="CreateContainer within sandbox \"609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.540 [INFO][4878] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0 coredns-668d6bf9bc- kube-system c6108b02-0f6c-4322-bab5-8d4e33e79daf 785 0 2025-05-09 23:59:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-84ab9604c4 coredns-668d6bf9bc-nrnrs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif66e6889ef7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrnrs" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-" May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.540 [INFO][4878] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrnrs" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.581 [INFO][4906] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" HandleID="k8s-pod-network.a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.601 [INFO][4906] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" HandleID="k8s-pod-network.a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000282eb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-84ab9604c4", "pod":"coredns-668d6bf9bc-nrnrs", "timestamp":"2025-05-09 23:59:58.581771339 +0000 UTC"}, Hostname:"ci-4081.3.3-n-84ab9604c4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
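Note: every MAC recorded so far (12:03:64:5e:f7:ac, aa:2b:5a:80:6c:17, 0e:3f:8b:29:43:6d, 8a:25:54:06:58:5a) has the locally-administered bit set and the multicast bit clear, consistent with randomly generated interface addresses. A sketch of producing one such MAC, assuming that is how they are chosen:

    package main

    import (
        "crypto/rand"
        "fmt"
    )

    func main() {
        buf := make([]byte, 6)
        if _, err := rand.Read(buf); err != nil {
            panic(err)
        }
        // Set the locally-administered bit, clear the multicast bit, as in
        // the addresses recorded above.
        buf[0] = (buf[0] | 0x02) &^ 0x01
        fmt.Printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
            buf[0], buf[1], buf[2], buf[3], buf[4], buf[5])
    }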
May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.601 [INFO][4906] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.625 [INFO][4906] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.626 [INFO][4906] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-84ab9604c4' May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.694 [INFO][4906] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.702 [INFO][4906] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.711 [INFO][4906] ipam/ipam.go 489: Trying affinity for 192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.715 [INFO][4906] ipam/ipam.go 155: Attempting to load block cidr=192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.718 [INFO][4906] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.718 [INFO][4906] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.192/26 handle="k8s-pod-network.a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.723 [INFO][4906] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048 May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.739 [INFO][4906] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.39.192/26 handle="k8s-pod-network.a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.751 [INFO][4906] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.39.197/26] block=192.168.39.192/26 handle="k8s-pod-network.a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.751 [INFO][4906] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.197/26] handle="k8s-pod-network.a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" host="ci-4081.3.3-n-84ab9604c4" May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.751 [INFO][4906] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
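Note: the v3.WorkloadEndpoint dumps are dense, but only a handful of Spec fields actually drive the dataplane. A trimmed stand-in for the endpoint dumped just below for coredns-668d6bf9bc-nrnrs, with values copied from the log and a local type used instead of the real projectcalico.org/v3 package:

    package main

    import "fmt"

    // Local stand-in for the projectcalico.org/v3 WorkloadEndpointSpec,
    // reduced to the fields that matter in this log; not the real package.
    type WorkloadEndpointSpec struct {
        Orchestrator  string
        Node          string
        Pod           string
        Endpoint      string
        IPNetworks    []string
        Profiles      []string
        InterfaceName string
        MAC           string
    }

    func main() {
        spec := WorkloadEndpointSpec{
            Orchestrator:  "k8s",
            Node:          "ci-4081.3.3-n-84ab9604c4",
            Pod:           "coredns-668d6bf9bc-nrnrs",
            Endpoint:      "eth0",
            IPNetworks:    []string{"192.168.39.197/32"},
            Profiles:      []string{"kns.kube-system", "ksa.kube-system.coredns"},
            InterfaceName: "calif66e6889ef7",
            // MAC stays empty until k8s.go 414 ("Added Mac, ...") fills it in.
        }
        fmt.Printf("%+v\n", spec)
    }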
May 9 23:59:58.803021 containerd[1722]: 2025-05-09 23:59:58.751 [INFO][4906] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.197/26] IPv6=[] ContainerID="a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" HandleID="k8s-pod-network.a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 9 23:59:58.803561 containerd[1722]: 2025-05-09 23:59:58.754 [INFO][4878] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrnrs" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c6108b02-0f6c-4322-bab5-8d4e33e79daf", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"", Pod:"coredns-668d6bf9bc-nrnrs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif66e6889ef7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 23:59:58.803561 containerd[1722]: 2025-05-09 23:59:58.755 [INFO][4878] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.39.197/32] ContainerID="a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrnrs" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 9 23:59:58.803561 containerd[1722]: 2025-05-09 23:59:58.755 [INFO][4878] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif66e6889ef7 ContainerID="a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrnrs" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 9 23:59:58.803561 containerd[1722]: 2025-05-09 23:59:58.764 [INFO][4878] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrnrs" 
WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 9 23:59:58.803561 containerd[1722]: 2025-05-09 23:59:58.764 [INFO][4878] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrnrs" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c6108b02-0f6c-4322-bab5-8d4e33e79daf", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048", Pod:"coredns-668d6bf9bc-nrnrs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif66e6889ef7", MAC:"52:ba:13:e2:6d:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 23:59:58.803561 containerd[1722]: 2025-05-09 23:59:58.796 [INFO][4878] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrnrs" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 9 23:59:58.848927 containerd[1722]: time="2025-05-09T23:59:58.848489601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:58.848927 containerd[1722]: time="2025-05-09T23:59:58.848561441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:58.848927 containerd[1722]: time="2025-05-09T23:59:58.848577641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:58.848927 containerd[1722]: time="2025-05-09T23:59:58.848690841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:58.888845 systemd[1]: Started cri-containerd-a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048.scope - libcontainer container a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048. May 9 23:59:58.916750 containerd[1722]: time="2025-05-09T23:59:58.916363226Z" level=info msg="CreateContainer within sandbox \"609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bee08df35d2aa381b4a4336e3167368305b783d36ae2051f0d90cce9300aac4c\"" May 9 23:59:58.918667 containerd[1722]: time="2025-05-09T23:59:58.918575306Z" level=info msg="StartContainer for \"bee08df35d2aa381b4a4336e3167368305b783d36ae2051f0d90cce9300aac4c\"" May 9 23:59:58.935527 containerd[1722]: time="2025-05-09T23:59:58.935485262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrnrs,Uid:c6108b02-0f6c-4322-bab5-8d4e33e79daf,Namespace:kube-system,Attempt:1,} returns sandbox id \"a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048\"" May 9 23:59:58.944984 containerd[1722]: time="2025-05-09T23:59:58.944860020Z" level=info msg="CreateContainer within sandbox \"a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 23:59:58.957911 systemd[1]: Started cri-containerd-bee08df35d2aa381b4a4336e3167368305b783d36ae2051f0d90cce9300aac4c.scope - libcontainer container bee08df35d2aa381b4a4336e3167368305b783d36ae2051f0d90cce9300aac4c. May 9 23:59:58.999261 containerd[1722]: time="2025-05-09T23:59:58.999111808Z" level=info msg="StartContainer for \"bee08df35d2aa381b4a4336e3167368305b783d36ae2051f0d90cce9300aac4c\" returns successfully" May 9 23:59:59.040265 containerd[1722]: time="2025-05-09T23:59:59.040207159Z" level=info msg="CreateContainer within sandbox \"a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d5e837240bdc5ee45862d403999a563f023bf02b9e7bb9f3b0f977a49a8ce36\"" May 9 23:59:59.041187 containerd[1722]: time="2025-05-09T23:59:59.041150959Z" level=info msg="StartContainer for \"2d5e837240bdc5ee45862d403999a563f023bf02b9e7bb9f3b0f977a49a8ce36\"" May 9 23:59:59.076859 systemd[1]: Started cri-containerd-2d5e837240bdc5ee45862d403999a563f023bf02b9e7bb9f3b0f977a49a8ce36.scope - libcontainer container 2d5e837240bdc5ee45862d403999a563f023bf02b9e7bb9f3b0f977a49a8ce36. 
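Each container above follows the same two-step lifecycle: CreateContainer registers the container and its OCI spec inside the sandbox, StartContainer launches the runc task, and systemd then tracks it as a cri-containerd-<id>.scope unit. A hedged sketch of the equivalent create/start flow against containerd's public Go client rather than the CRI plugin itself; the image reference is illustrative, the image is assumed already pulled, the default socket reachable, and teardown is elided.

    // Create-then-start lifecycle matching the "CreateContainer within
    // sandbox ... returns container id" / "StartContainer ... returns
    // successfully" pairs above.
    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Illustrative image reference; assumed to be pulled already.
        image, err := client.GetImage(ctx, "registry.k8s.io/coredns/coredns:v1.11.3")
        if err != nil {
            log.Fatal(err)
        }
        // "CreateContainer": register the container and its OCI spec.
        container, err := client.NewContainer(ctx, "coredns-demo",
            containerd.WithNewSnapshot("coredns-demo-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)))
        if err != nil {
            log.Fatal(err)
        }
        // "StartContainer": create the runc task and start it; on the host in
        // this log, systemd then tracks it as a cri-containerd-<id>.scope unit.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
        // Teardown (task.Kill/Delete, container.Delete) elided in this sketch.
    }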
May 9 23:59:59.110789 containerd[1722]: time="2025-05-09T23:59:59.110724264Z" level=info msg="StartContainer for \"2d5e837240bdc5ee45862d403999a563f023bf02b9e7bb9f3b0f977a49a8ce36\" returns successfully" May 9 23:59:59.135833 systemd-networkd[1507]: calib98dafff4f0: Gained IPv6LL May 9 23:59:59.504897 kubelet[3155]: I0509 23:59:59.503866 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nrnrs" podStartSLOduration=32.503845899 podStartE2EDuration="32.503845899s" podCreationTimestamp="2025-05-09 23:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:59:59.503520579 +0000 UTC m=+38.389423119" watchObservedRunningTime="2025-05-09 23:59:59.503845899 +0000 UTC m=+38.389748479" May 9 23:59:59.968324 systemd-networkd[1507]: cali39e19212d44: Gained IPv6LL May 10 00:00:00.032470 systemd-networkd[1507]: cali77c1d22eb3a: Gained IPv6LL May 10 00:00:00.425443 containerd[1722]: time="2025-05-10T00:00:00.425294901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:00.433814 containerd[1722]: time="2025-05-10T00:00:00.433540219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 10 00:00:00.456845 containerd[1722]: time="2025-05-10T00:00:00.456767135Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:00.463028 containerd[1722]: time="2025-05-10T00:00:00.462888773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:00.463887 containerd[1722]: time="2025-05-10T00:00:00.463783533Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 2.408704081s" May 10 00:00:00.463887 containerd[1722]: time="2025-05-10T00:00:00.463828453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 10 00:00:00.470366 containerd[1722]: time="2025-05-10T00:00:00.470267972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:00:00.471160 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 10 00:00:00.498690 containerd[1722]: time="2025-05-10T00:00:00.498408446Z" level=info msg="CreateContainer within sandbox \"57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 10 00:00:00.514503 systemd[1]: logrotate.service: Deactivated successfully. 
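The pod_startup_latency_tracker numbers in this log are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and for pods that pulled images (the calico-apiserver entries further down) podStartSLOduration additionally subtracts the pull window, lastFinishedPulling minus firstStartedPulling. The 0001-01-01 sentinels in the coredns entry above mean no pull contributed. A quick check of both cases in Go, using timestamps copied from this log:

    // Verifies the pod_startup_latency_tracker arithmetic against this log.
    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05 -0700 MST" // kubelet's format; Parse accepts the fractional seconds

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // coredns-668d6bf9bc-nrnrs, from the entry above (no image pull):
        created := mustParse("2025-05-09 23:59:27 +0000 UTC")
        running := mustParse("2025-05-09 23:59:59.503845899 +0000 UTC")
        fmt.Println(running.Sub(created)) // 32.503845899s == podStartE2EDuration

        // calico-apiserver-5dc459848f-csr7s, from a later entry in this log:
        e2e := mustParse("2025-05-10 00:00:16.695063981 +0000 UTC").
            Sub(mustParse("2025-05-09 23:59:33 +0000 UTC"))
        pull := mustParse("2025-05-10 00:00:14.922924214 +0000 UTC").
            Sub(mustParse("2025-05-09 23:59:58.167236868 +0000 UTC"))
        fmt.Println(e2e - pull) // 26.939376635s == podStartSLOduration
    }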
May 10 00:00:00.543784 systemd-networkd[1507]: calie98c5dbbe03: Gained IPv6LL May 10 00:00:00.570777 containerd[1722]: time="2025-05-10T00:00:00.570719671Z" level=info msg="CreateContainer within sandbox \"57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3fec5544c97b5ff89fe2d231f3747d2ca61fd0a508a34a6492034cf978b082b9\"" May 10 00:00:00.572070 containerd[1722]: time="2025-05-10T00:00:00.571987951Z" level=info msg="StartContainer for \"3fec5544c97b5ff89fe2d231f3747d2ca61fd0a508a34a6492034cf978b082b9\"" May 10 00:00:00.612863 systemd[1]: Started cri-containerd-3fec5544c97b5ff89fe2d231f3747d2ca61fd0a508a34a6492034cf978b082b9.scope - libcontainer container 3fec5544c97b5ff89fe2d231f3747d2ca61fd0a508a34a6492034cf978b082b9. May 10 00:00:00.651608 containerd[1722]: time="2025-05-10T00:00:00.651532734Z" level=info msg="StartContainer for \"3fec5544c97b5ff89fe2d231f3747d2ca61fd0a508a34a6492034cf978b082b9\" returns successfully" May 10 00:00:00.671794 systemd-networkd[1507]: calif66e6889ef7: Gained IPv6LL May 10 00:00:01.518406 kubelet[3155]: I0510 00:00:01.517889 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9f97fddfb-hvskp" podStartSLOduration=24.100481954 podStartE2EDuration="26.517867793s" podCreationTimestamp="2025-05-09 23:59:35 +0000 UTC" firstStartedPulling="2025-05-09 23:59:58.052436653 +0000 UTC m=+36.938339193" lastFinishedPulling="2025-05-10 00:00:00.469822412 +0000 UTC m=+39.355725032" observedRunningTime="2025-05-10 00:00:01.516310193 +0000 UTC m=+40.402212773" watchObservedRunningTime="2025-05-10 00:00:01.517867793 +0000 UTC m=+40.403770373" May 10 00:00:01.518406 kubelet[3155]: I0510 00:00:01.518003 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-55grc" podStartSLOduration=34.517997993 podStartE2EDuration="34.517997993s" podCreationTimestamp="2025-05-09 23:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:59:59.555411648 +0000 UTC m=+38.441314228" watchObservedRunningTime="2025-05-10 00:00:01.517997993 +0000 UTC m=+40.403900573" May 10 00:00:02.223675 containerd[1722]: time="2025-05-10T00:00:02.223205805Z" level=info msg="StopPodSandbox for \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\"" May 10 00:00:02.341058 containerd[1722]: 2025-05-10 00:00:02.285 [INFO][5159] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" May 10 00:00:02.341058 containerd[1722]: 2025-05-10 00:00:02.285 [INFO][5159] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" iface="eth0" netns="/var/run/netns/cni-80d0edda-626e-0e3a-d241-53af20bf096d" May 10 00:00:02.341058 containerd[1722]: 2025-05-10 00:00:02.285 [INFO][5159] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" iface="eth0" netns="/var/run/netns/cni-80d0edda-626e-0e3a-d241-53af20bf096d" May 10 00:00:02.341058 containerd[1722]: 2025-05-10 00:00:02.287 [INFO][5159] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" iface="eth0" netns="/var/run/netns/cni-80d0edda-626e-0e3a-d241-53af20bf096d" May 10 00:00:02.341058 containerd[1722]: 2025-05-10 00:00:02.287 [INFO][5159] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" May 10 00:00:02.341058 containerd[1722]: 2025-05-10 00:00:02.287 [INFO][5159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" May 10 00:00:02.341058 containerd[1722]: 2025-05-10 00:00:02.317 [INFO][5166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" HandleID="k8s-pod-network.db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:02.341058 containerd[1722]: 2025-05-10 00:00:02.317 [INFO][5166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:00:02.341058 containerd[1722]: 2025-05-10 00:00:02.317 [INFO][5166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:00:02.341058 containerd[1722]: 2025-05-10 00:00:02.331 [WARNING][5166] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" HandleID="k8s-pod-network.db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:02.341058 containerd[1722]: 2025-05-10 00:00:02.331 [INFO][5166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" HandleID="k8s-pod-network.db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:02.341058 containerd[1722]: 2025-05-10 00:00:02.334 [INFO][5166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:00:02.341058 containerd[1722]: 2025-05-10 00:00:02.336 [INFO][5159] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" May 10 00:00:02.341058 containerd[1722]: time="2025-05-10T00:00:02.340712700Z" level=info msg="TearDown network for sandbox \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\" successfully" May 10 00:00:02.341058 containerd[1722]: time="2025-05-10T00:00:02.340750220Z" level=info msg="StopPodSandbox for \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\" returns successfully" May 10 00:00:02.360376 containerd[1722]: time="2025-05-10T00:00:02.341508340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc459848f-p7gmj,Uid:8a08f28a-435c-4180-8af5-1b9c1f60f93b,Namespace:calico-apiserver,Attempt:1,}" May 10 00:00:02.342282 systemd[1]: run-netns-cni\x2d80d0edda\x2d626e\x2d0e3a\x2dd241\x2d53af20bf096d.mount: Deactivated successfully. 
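The teardown above shows why CNI DEL must be idempotent: the workload's veth is already gone ("Nothing to do"), and the IPAM release by handleID finds nothing and logs "Asked to release address but it doesn't exist. Ignoring" before falling back to the workloadID, yet StopPodSandbox still returns successfully. A minimal sketch of that treat-missing-as-success contract; the map and the shortened handle ID are invented for illustration.

    // CNI DEL idempotency as exercised above: a veth that is already gone and
    // a handle with no recorded addresses are both success, not errors, since
    // the runtime may retry DEL after a partial teardown.
    package main

    import "fmt"

    var addrsByHandle = map[string][]string{} // handleID -> claimed addresses

    func releaseByHandle(handleID string) {
        if _, ok := addrsByHandle[handleID]; !ok {
            // "Asked to release address but it doesn't exist. Ignoring"
            fmt.Printf("WARNING: no addresses for handle %s, ignoring\n", handleID)
            return
        }
        delete(addrsByHandle, handleID)
    }

    func main() {
        releaseByHandle("k8s-pod-network.db852a19") // nothing recorded: warn and continue
        releaseByHandle("k8s-pod-network.db852a19") // retried DEL: still succeeds
    }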
May 10 00:00:02.503539 kubelet[3155]: I0510 00:00:02.503000 3155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:00:03.310019 systemd-networkd[1507]: cali077cffecf68: Link UP May 10 00:00:03.310945 systemd-networkd[1507]: cali077cffecf68: Gained carrier May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.219 [INFO][5178] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0 calico-apiserver-5dc459848f- calico-apiserver 8a08f28a-435c-4180-8af5-1b9c1f60f93b 827 0 2025-05-09 23:59:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dc459848f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-84ab9604c4 calico-apiserver-5dc459848f-p7gmj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali077cffecf68 [] []}} ContainerID="c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-p7gmj" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-" May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.219 [INFO][5178] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-p7gmj" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.249 [INFO][5190] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" HandleID="k8s-pod-network.c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.262 [INFO][5190] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" HandleID="k8s-pod-network.c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b8ba0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-84ab9604c4", "pod":"calico-apiserver-5dc459848f-p7gmj", "timestamp":"2025-05-10 00:00:03.24941619 +0000 UTC"}, Hostname:"ci-4081.3.3-n-84ab9604c4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.263 [INFO][5190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.263 [INFO][5190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.263 [INFO][5190] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-84ab9604c4' May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.265 [INFO][5190] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" host="ci-4081.3.3-n-84ab9604c4" May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.270 [INFO][5190] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-n-84ab9604c4" May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.275 [INFO][5190] ipam/ipam.go 489: Trying affinity for 192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.278 [INFO][5190] ipam/ipam.go 155: Attempting to load block cidr=192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.281 [INFO][5190] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.192/26 host="ci-4081.3.3-n-84ab9604c4" May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.281 [INFO][5190] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.192/26 handle="k8s-pod-network.c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" host="ci-4081.3.3-n-84ab9604c4" May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.283 [INFO][5190] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6 May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.290 [INFO][5190] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.39.192/26 handle="k8s-pod-network.c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" host="ci-4081.3.3-n-84ab9604c4" May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.303 [INFO][5190] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.39.198/26] block=192.168.39.192/26 handle="k8s-pod-network.c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" host="ci-4081.3.3-n-84ab9604c4" May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.303 [INFO][5190] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.198/26] handle="k8s-pod-network.c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" host="ci-4081.3.3-n-84ab9604c4" May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.304 [INFO][5190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:00:03.339385 containerd[1722]: 2025-05-10 00:00:03.304 [INFO][5190] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.198/26] IPv6=[] ContainerID="c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" HandleID="k8s-pod-network.c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:03.341224 containerd[1722]: 2025-05-10 00:00:03.306 [INFO][5178] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-p7gmj" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0", GenerateName:"calico-apiserver-5dc459848f-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a08f28a-435c-4180-8af5-1b9c1f60f93b", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc459848f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"", Pod:"calico-apiserver-5dc459848f-p7gmj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali077cffecf68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:00:03.341224 containerd[1722]: 2025-05-10 00:00:03.306 [INFO][5178] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.39.198/32] ContainerID="c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-p7gmj" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:03.341224 containerd[1722]: 2025-05-10 00:00:03.306 [INFO][5178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali077cffecf68 ContainerID="c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-p7gmj" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:03.341224 containerd[1722]: 2025-05-10 00:00:03.310 [INFO][5178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-p7gmj" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:03.341224 containerd[1722]: 2025-05-10 00:00:03.310 [INFO][5178] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-p7gmj" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0", GenerateName:"calico-apiserver-5dc459848f-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a08f28a-435c-4180-8af5-1b9c1f60f93b", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc459848f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6", Pod:"calico-apiserver-5dc459848f-p7gmj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali077cffecf68", MAC:"b2:ec:bf:7d:5f:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:00:03.341224 containerd[1722]: 2025-05-10 00:00:03.336 [INFO][5178] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc459848f-p7gmj" WorkloadEndpoint="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:03.373857 kubelet[3155]: I0510 00:00:03.373244 3155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:00:03.376507 containerd[1722]: time="2025-05-10T00:00:03.374840764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:03.376507 containerd[1722]: time="2025-05-10T00:00:03.374920844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:03.376507 containerd[1722]: time="2025-05-10T00:00:03.374936524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:03.376507 containerd[1722]: time="2025-05-10T00:00:03.375044884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:03.404989 systemd[1]: run-containerd-runc-k8s.io-c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6-runc.eyV3SW.mount: Deactivated successfully. 
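The dataplane_linux.go steps above (set the host-side veth name to cali077cffecf68, then record the MAC, interface name, and active container ID on the endpoint) come down to veth-pair plumbing. A rough sketch of the host-side half using the vishvananda/netlink library, which is an assumption here rather than the plugin's actual code path; it needs CAP_NET_ADMIN, and the real plugin also moves the peer into the pod's network namespace and records the workload-side MAC (b2:ec:bf:7d:5f:36 above) on the endpoint.

    // Host-side half of the veth plumbing, sketched with vishvananda/netlink.
    package main

    import (
        "fmt"
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        veth := &netlink.Veth{
            LinkAttrs: netlink.LinkAttrs{Name: "cali077cffecf68"}, // host side, as in the log
            PeerName:  "eth0-demo",                                // workload side (illustrative name)
        }
        if err := netlink.LinkAdd(veth); err != nil {
            log.Fatal(err)
        }
        host, err := netlink.LinkByName("cali077cffecf68")
        if err != nil {
            log.Fatal(err)
        }
        // The endpoint written above records the workload-side MAC
        // (b2:ec:bf:7d:5f:36); the host side has its own address:
        fmt.Println("host veth MAC:", host.Attrs().HardwareAddr)
    }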
May 10 00:00:03.424052 systemd[1]: Started cri-containerd-c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6.scope - libcontainer container c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6. May 10 00:00:03.482878 containerd[1722]: time="2025-05-10T00:00:03.482829942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc459848f-p7gmj,Uid:8a08f28a-435c-4180-8af5-1b9c1f60f93b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6\"" May 10 00:00:04.767792 systemd-networkd[1507]: cali077cffecf68: Gained IPv6LL May 10 00:00:13.319309 kubelet[3155]: I0510 00:00:13.319254 3155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:00:13.428464 systemd[1]: run-containerd-runc-k8s.io-3fec5544c97b5ff89fe2d231f3747d2ca61fd0a508a34a6492034cf978b082b9-runc.KwBVAV.mount: Deactivated successfully. May 10 00:00:14.900875 containerd[1722]: time="2025-05-10T00:00:14.900059179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:14.903521 containerd[1722]: time="2025-05-10T00:00:14.903469858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 10 00:00:14.911943 containerd[1722]: time="2025-05-10T00:00:14.911893097Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:14.920025 containerd[1722]: time="2025-05-10T00:00:14.919966975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:14.921870 containerd[1722]: time="2025-05-10T00:00:14.921032974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 14.450710202s" May 10 00:00:14.921870 containerd[1722]: time="2025-05-10T00:00:14.921084414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:00:14.923840 containerd[1722]: time="2025-05-10T00:00:14.923282014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 10 00:00:14.927364 containerd[1722]: time="2025-05-10T00:00:14.927191213Z" level=info msg="CreateContainer within sandbox \"bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:00:14.981222 containerd[1722]: time="2025-05-10T00:00:14.981166721Z" level=info msg="CreateContainer within sandbox \"bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1dd89239a69b5e5274e2cb14f50cc80505b45db2057a3d32bbeded697ef43078\"" May 10 00:00:14.983330 containerd[1722]: time="2025-05-10T00:00:14.982089241Z" level=info msg="StartContainer for \"1dd89239a69b5e5274e2cb14f50cc80505b45db2057a3d32bbeded697ef43078\"" 
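"Gained IPv6LL" means the interface has acquired its fe80::/64 link-local address. One conventional derivation is EUI-64 from the MAC, sketched below; note that systemd-networkd may instead use stable-privacy addresses, so the actual address on cali077cffecf68 may differ. The sketch uses the coredns endpoint MAC recorded earlier in this log purely to illustrate the transformation.

    // EUI-64 derivation: insert ff:fe in the middle of the MAC, flip the
    // universal/local bit, and prefix fe80::/64.
    package main

    import (
        "fmt"
        "net"
    )

    func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
        ip := make(net.IP, net.IPv6len)
        ip[0], ip[1] = 0xfe, 0x80
        ip[8] = mac[0] ^ 0x02 // flip the universal/local bit
        ip[9], ip[10], ip[11] = mac[1], mac[2], 0xff
        ip[12], ip[13], ip[14], ip[15] = 0xfe, mac[3], mac[4], mac[5]
        return ip
    }

    func main() {
        mac, _ := net.ParseMAC("52:ba:13:e2:6d:82")
        fmt.Println(linkLocalFromMAC(mac)) // fe80::50ba:13ff:fee2:6d82
    }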
May 10 00:00:15.022994 systemd[1]: Started cri-containerd-1dd89239a69b5e5274e2cb14f50cc80505b45db2057a3d32bbeded697ef43078.scope - libcontainer container 1dd89239a69b5e5274e2cb14f50cc80505b45db2057a3d32bbeded697ef43078. May 10 00:00:15.076149 containerd[1722]: time="2025-05-10T00:00:15.076080540Z" level=info msg="StartContainer for \"1dd89239a69b5e5274e2cb14f50cc80505b45db2057a3d32bbeded697ef43078\" returns successfully" May 10 00:00:16.651679 containerd[1722]: time="2025-05-10T00:00:16.650510310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:16.654522 containerd[1722]: time="2025-05-10T00:00:16.654471590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 10 00:00:16.663868 containerd[1722]: time="2025-05-10T00:00:16.663805988Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:16.674486 containerd[1722]: time="2025-05-10T00:00:16.674426425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:16.675257 containerd[1722]: time="2025-05-10T00:00:16.675219065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.751891851s" May 10 00:00:16.675257 containerd[1722]: time="2025-05-10T00:00:16.675254505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 10 00:00:16.679367 containerd[1722]: time="2025-05-10T00:00:16.679266704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:00:16.680547 containerd[1722]: time="2025-05-10T00:00:16.680333384Z" level=info msg="CreateContainer within sandbox \"a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 10 00:00:16.695169 kubelet[3155]: I0510 00:00:16.695086 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dc459848f-csr7s" podStartSLOduration=26.939376635 podStartE2EDuration="43.695063981s" podCreationTimestamp="2025-05-09 23:59:33 +0000 UTC" firstStartedPulling="2025-05-09 23:59:58.167236868 +0000 UTC m=+37.053139448" lastFinishedPulling="2025-05-10 00:00:14.922924214 +0000 UTC m=+53.808826794" observedRunningTime="2025-05-10 00:00:15.572235069 +0000 UTC m=+54.458137649" watchObservedRunningTime="2025-05-10 00:00:16.695063981 +0000 UTC m=+55.580966561" May 10 00:00:16.750082 containerd[1722]: time="2025-05-10T00:00:16.750021729Z" level=info msg="CreateContainer within sandbox \"a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"035e7c3b34cd5ece2e2a565d9177536bd49072d0b443078015d65b4c97c801f0\"" May 10 00:00:16.750963 containerd[1722]: time="2025-05-10T00:00:16.750925609Z" level=info msg="StartContainer for 
\"035e7c3b34cd5ece2e2a565d9177536bd49072d0b443078015d65b4c97c801f0\"" May 10 00:00:16.795179 systemd[1]: run-containerd-runc-k8s.io-035e7c3b34cd5ece2e2a565d9177536bd49072d0b443078015d65b4c97c801f0-runc.A3y3Cc.mount: Deactivated successfully. May 10 00:00:16.803896 systemd[1]: Started cri-containerd-035e7c3b34cd5ece2e2a565d9177536bd49072d0b443078015d65b4c97c801f0.scope - libcontainer container 035e7c3b34cd5ece2e2a565d9177536bd49072d0b443078015d65b4c97c801f0. May 10 00:00:16.843804 containerd[1722]: time="2025-05-10T00:00:16.843745828Z" level=info msg="StartContainer for \"035e7c3b34cd5ece2e2a565d9177536bd49072d0b443078015d65b4c97c801f0\" returns successfully" May 10 00:00:18.004531 containerd[1722]: time="2025-05-10T00:00:18.004473616Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:18.008111 containerd[1722]: time="2025-05-10T00:00:18.008058255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 10 00:00:18.010768 containerd[1722]: time="2025-05-10T00:00:18.010598055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.331285231s" May 10 00:00:18.010768 containerd[1722]: time="2025-05-10T00:00:18.010666415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:00:18.013625 containerd[1722]: time="2025-05-10T00:00:18.013376654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 10 00:00:18.014375 containerd[1722]: time="2025-05-10T00:00:18.014311494Z" level=info msg="CreateContainer within sandbox \"c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:00:18.092593 containerd[1722]: time="2025-05-10T00:00:18.092485837Z" level=info msg="CreateContainer within sandbox \"c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4137d5331565e1d9d85a5afc710aac14c033c0a953ef3d0db6f75ce697d66d6f\"" May 10 00:00:18.094048 containerd[1722]: time="2025-05-10T00:00:18.093335037Z" level=info msg="StartContainer for \"4137d5331565e1d9d85a5afc710aac14c033c0a953ef3d0db6f75ce697d66d6f\"" May 10 00:00:18.137821 systemd[1]: run-containerd-runc-k8s.io-4137d5331565e1d9d85a5afc710aac14c033c0a953ef3d0db6f75ce697d66d6f-runc.Uoioty.mount: Deactivated successfully. May 10 00:00:18.145872 systemd[1]: Started cri-containerd-4137d5331565e1d9d85a5afc710aac14c033c0a953ef3d0db6f75ce697d66d6f.scope - libcontainer container 4137d5331565e1d9d85a5afc710aac14c033c0a953ef3d0db6f75ce697d66d6f. 
May 10 00:00:18.221517 containerd[1722]: time="2025-05-10T00:00:18.221463369Z" level=info msg="StartContainer for \"4137d5331565e1d9d85a5afc710aac14c033c0a953ef3d0db6f75ce697d66d6f\" returns successfully" May 10 00:00:18.583699 kubelet[3155]: I0510 00:00:18.583590 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dc459848f-p7gmj" podStartSLOduration=31.068145134 podStartE2EDuration="45.58356901s" podCreationTimestamp="2025-05-09 23:59:33 +0000 UTC" firstStartedPulling="2025-05-10 00:00:03.496287699 +0000 UTC m=+42.382190279" lastFinishedPulling="2025-05-10 00:00:18.011711575 +0000 UTC m=+56.897614155" observedRunningTime="2025-05-10 00:00:18.58351165 +0000 UTC m=+57.469414230" watchObservedRunningTime="2025-05-10 00:00:18.58356901 +0000 UTC m=+57.469471670" May 10 00:00:19.569004 kubelet[3155]: I0510 00:00:19.568955 3155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:00:20.016198 containerd[1722]: time="2025-05-10T00:00:20.016140579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:20.020825 containerd[1722]: time="2025-05-10T00:00:20.020773938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 10 00:00:20.028860 containerd[1722]: time="2025-05-10T00:00:20.028766296Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:20.037133 containerd[1722]: time="2025-05-10T00:00:20.037023214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:20.038248 containerd[1722]: time="2025-05-10T00:00:20.037740374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 2.02432116s" May 10 00:00:20.038248 containerd[1722]: time="2025-05-10T00:00:20.037784054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 10 00:00:20.041711 containerd[1722]: time="2025-05-10T00:00:20.041628893Z" level=info msg="CreateContainer within sandbox \"a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 10 00:00:20.134371 containerd[1722]: time="2025-05-10T00:00:20.134308473Z" level=info msg="CreateContainer within sandbox \"a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e844bea1ef69a23310244dcc45507743ff1cfe4c1cab91a02121e93890a6bf03\"" May 10 00:00:20.135490 containerd[1722]: time="2025-05-10T00:00:20.135304153Z" level=info msg="StartContainer for \"e844bea1ef69a23310244dcc45507743ff1cfe4c1cab91a02121e93890a6bf03\"" May 10 00:00:20.171291 
systemd[1]: run-containerd-runc-k8s.io-e844bea1ef69a23310244dcc45507743ff1cfe4c1cab91a02121e93890a6bf03-runc.9vk9J7.mount: Deactivated successfully. May 10 00:00:20.187871 systemd[1]: Started cri-containerd-e844bea1ef69a23310244dcc45507743ff1cfe4c1cab91a02121e93890a6bf03.scope - libcontainer container e844bea1ef69a23310244dcc45507743ff1cfe4c1cab91a02121e93890a6bf03. May 10 00:00:20.235316 containerd[1722]: time="2025-05-10T00:00:20.235233011Z" level=info msg="StartContainer for \"e844bea1ef69a23310244dcc45507743ff1cfe4c1cab91a02121e93890a6bf03\" returns successfully" May 10 00:00:20.400084 kubelet[3155]: I0510 00:00:20.399822 3155 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 10 00:00:20.400084 kubelet[3155]: I0510 00:00:20.399877 3155 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 10 00:00:21.247119 containerd[1722]: time="2025-05-10T00:00:21.247063111Z" level=info msg="StopPodSandbox for \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\"" May 10 00:00:21.341022 containerd[1722]: 2025-05-10 00:00:21.293 [WARNING][5533] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0", GenerateName:"calico-kube-controllers-9f97fddfb-", Namespace:"calico-system", SelfLink:"", UID:"1563bd12-d559-40ca-9df7-724c26d2d3e7", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9f97fddfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49", Pod:"calico-kube-controllers-9f97fddfb-hvskp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.39.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib98dafff4f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:00:21.341022 containerd[1722]: 2025-05-10 00:00:21.293 [INFO][5533] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" May 10 00:00:21.341022 containerd[1722]: 2025-05-10 00:00:21.293 [INFO][5533] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" iface="eth0" netns="" May 10 00:00:21.341022 containerd[1722]: 2025-05-10 00:00:21.293 [INFO][5533] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" May 10 00:00:21.341022 containerd[1722]: 2025-05-10 00:00:21.293 [INFO][5533] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" May 10 00:00:21.341022 containerd[1722]: 2025-05-10 00:00:21.316 [INFO][5541] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" HandleID="k8s-pod-network.2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 10 00:00:21.341022 containerd[1722]: 2025-05-10 00:00:21.317 [INFO][5541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:00:21.341022 containerd[1722]: 2025-05-10 00:00:21.317 [INFO][5541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:00:21.341022 containerd[1722]: 2025-05-10 00:00:21.331 [WARNING][5541] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" HandleID="k8s-pod-network.2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 10 00:00:21.341022 containerd[1722]: 2025-05-10 00:00:21.331 [INFO][5541] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" HandleID="k8s-pod-network.2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 10 00:00:21.341022 containerd[1722]: 2025-05-10 00:00:21.333 [INFO][5541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:00:21.341022 containerd[1722]: 2025-05-10 00:00:21.336 [INFO][5533] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" May 10 00:00:21.341022 containerd[1722]: time="2025-05-10T00:00:21.340891571Z" level=info msg="TearDown network for sandbox \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\" successfully" May 10 00:00:21.341022 containerd[1722]: time="2025-05-10T00:00:21.340941811Z" level=info msg="StopPodSandbox for \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\" returns successfully" May 10 00:00:21.342164 containerd[1722]: time="2025-05-10T00:00:21.342121451Z" level=info msg="RemovePodSandbox for \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\"" May 10 00:00:21.342164 containerd[1722]: time="2025-05-10T00:00:21.342164891Z" level=info msg="Forcibly stopping sandbox \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\"" May 10 00:00:21.422101 containerd[1722]: 2025-05-10 00:00:21.386 [WARNING][5560] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0", GenerateName:"calico-kube-controllers-9f97fddfb-", Namespace:"calico-system", SelfLink:"", UID:"1563bd12-d559-40ca-9df7-724c26d2d3e7", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9f97fddfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"57ccff2ff3abfc2642f894447eed13cd563b3a2313c162a3e9b20e1d14a6cd49", Pod:"calico-kube-controllers-9f97fddfb-hvskp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.39.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib98dafff4f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:00:21.422101 containerd[1722]: 2025-05-10 00:00:21.387 [INFO][5560] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" May 10 00:00:21.422101 containerd[1722]: 2025-05-10 00:00:21.387 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" iface="eth0" netns="" May 10 00:00:21.422101 containerd[1722]: 2025-05-10 00:00:21.387 [INFO][5560] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" May 10 00:00:21.422101 containerd[1722]: 2025-05-10 00:00:21.387 [INFO][5560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" May 10 00:00:21.422101 containerd[1722]: 2025-05-10 00:00:21.407 [INFO][5567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" HandleID="k8s-pod-network.2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 10 00:00:21.422101 containerd[1722]: 2025-05-10 00:00:21.407 [INFO][5567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:00:21.422101 containerd[1722]: 2025-05-10 00:00:21.407 [INFO][5567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:00:21.422101 containerd[1722]: 2025-05-10 00:00:21.416 [WARNING][5567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" HandleID="k8s-pod-network.2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 10 00:00:21.422101 containerd[1722]: 2025-05-10 00:00:21.417 [INFO][5567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" HandleID="k8s-pod-network.2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--kube--controllers--9f97fddfb--hvskp-eth0" May 10 00:00:21.422101 containerd[1722]: 2025-05-10 00:00:21.418 [INFO][5567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:00:21.422101 containerd[1722]: 2025-05-10 00:00:21.420 [INFO][5560] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4" May 10 00:00:21.422527 containerd[1722]: time="2025-05-10T00:00:21.422168793Z" level=info msg="TearDown network for sandbox \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\" successfully" May 10 00:00:21.441566 containerd[1722]: time="2025-05-10T00:00:21.441466029Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:00:21.441711 containerd[1722]: time="2025-05-10T00:00:21.441597269Z" level=info msg="RemovePodSandbox \"2925008f5139ce3c515056739d35cc3aa4dfae6b613f4c20dea4a86183ce65d4\" returns successfully" May 10 00:00:21.442539 containerd[1722]: time="2025-05-10T00:00:21.442506029Z" level=info msg="StopPodSandbox for \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\"" May 10 00:00:21.543664 containerd[1722]: 2025-05-10 00:00:21.491 [WARNING][5585] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ff719f3-efa6-438c-9acf-271732122094", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556", Pod:"csi-node-driver-7sw5t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.39.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali77c1d22eb3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:00:21.543664 containerd[1722]: 2025-05-10 00:00:21.492 [INFO][5585] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" May 10 00:00:21.543664 containerd[1722]: 2025-05-10 00:00:21.492 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" iface="eth0" netns="" May 10 00:00:21.543664 containerd[1722]: 2025-05-10 00:00:21.492 [INFO][5585] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" May 10 00:00:21.543664 containerd[1722]: 2025-05-10 00:00:21.492 [INFO][5585] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" May 10 00:00:21.543664 containerd[1722]: 2025-05-10 00:00:21.527 [INFO][5592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" HandleID="k8s-pod-network.8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" Workload="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 10 00:00:21.543664 containerd[1722]: 2025-05-10 00:00:21.527 [INFO][5592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:00:21.543664 containerd[1722]: 2025-05-10 00:00:21.528 [INFO][5592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:00:21.543664 containerd[1722]: 2025-05-10 00:00:21.538 [WARNING][5592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" HandleID="k8s-pod-network.8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" Workload="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 10 00:00:21.543664 containerd[1722]: 2025-05-10 00:00:21.538 [INFO][5592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" HandleID="k8s-pod-network.8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" Workload="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 10 00:00:21.543664 containerd[1722]: 2025-05-10 00:00:21.540 [INFO][5592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:00:21.543664 containerd[1722]: 2025-05-10 00:00:21.542 [INFO][5585] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" May 10 00:00:21.544058 containerd[1722]: time="2025-05-10T00:00:21.543623647Z" level=info msg="TearDown network for sandbox \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\" successfully" May 10 00:00:21.544058 containerd[1722]: time="2025-05-10T00:00:21.543681687Z" level=info msg="StopPodSandbox for \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\" returns successfully" May 10 00:00:21.545063 containerd[1722]: time="2025-05-10T00:00:21.544270327Z" level=info msg="RemovePodSandbox for \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\"" May 10 00:00:21.545063 containerd[1722]: time="2025-05-10T00:00:21.544311647Z" level=info msg="Forcibly stopping sandbox \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\"" May 10 00:00:21.625762 containerd[1722]: 2025-05-10 00:00:21.590 [WARNING][5614] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ff719f3-efa6-438c-9acf-271732122094", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"a7f226264cb708316c25d9e9b7aeda23b07d6db3921d78d5fca1f5b41b618556", Pod:"csi-node-driver-7sw5t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.39.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali77c1d22eb3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:00:21.625762 containerd[1722]: 2025-05-10 00:00:21.590 [INFO][5614] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" May 10 00:00:21.625762 containerd[1722]: 2025-05-10 00:00:21.590 [INFO][5614] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" iface="eth0" netns="" May 10 00:00:21.625762 containerd[1722]: 2025-05-10 00:00:21.590 [INFO][5614] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" May 10 00:00:21.625762 containerd[1722]: 2025-05-10 00:00:21.590 [INFO][5614] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" May 10 00:00:21.625762 containerd[1722]: 2025-05-10 00:00:21.611 [INFO][5621] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" HandleID="k8s-pod-network.8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" Workload="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 10 00:00:21.625762 containerd[1722]: 2025-05-10 00:00:21.611 [INFO][5621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:00:21.625762 containerd[1722]: 2025-05-10 00:00:21.611 [INFO][5621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:00:21.625762 containerd[1722]: 2025-05-10 00:00:21.620 [WARNING][5621] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" HandleID="k8s-pod-network.8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" Workload="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 10 00:00:21.625762 containerd[1722]: 2025-05-10 00:00:21.620 [INFO][5621] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" HandleID="k8s-pod-network.8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" Workload="ci--4081.3.3--n--84ab9604c4-k8s-csi--node--driver--7sw5t-eth0" May 10 00:00:21.625762 containerd[1722]: 2025-05-10 00:00:21.622 [INFO][5621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:00:21.625762 containerd[1722]: 2025-05-10 00:00:21.623 [INFO][5614] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b" May 10 00:00:21.627610 containerd[1722]: time="2025-05-10T00:00:21.626285269Z" level=info msg="TearDown network for sandbox \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\" successfully" May 10 00:00:21.637027 containerd[1722]: time="2025-05-10T00:00:21.636980387Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:00:21.637264 containerd[1722]: time="2025-05-10T00:00:21.637242947Z" level=info msg="RemovePodSandbox \"8e403b09f1fdec814a2d95f3ae31d19607ac9f00d622d98cd54eac579a926f7b\" returns successfully" May 10 00:00:21.637862 containerd[1722]: time="2025-05-10T00:00:21.637836226Z" level=info msg="StopPodSandbox for \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\"" May 10 00:00:21.717581 containerd[1722]: 2025-05-10 00:00:21.682 [WARNING][5639] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c6108b02-0f6c-4322-bab5-8d4e33e79daf", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048", Pod:"coredns-668d6bf9bc-nrnrs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif66e6889ef7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:00:21.717581 containerd[1722]: 2025-05-10 00:00:21.682 [INFO][5639] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" May 10 00:00:21.717581 containerd[1722]: 2025-05-10 00:00:21.682 [INFO][5639] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" iface="eth0" netns="" May 10 00:00:21.717581 containerd[1722]: 2025-05-10 00:00:21.682 [INFO][5639] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" May 10 00:00:21.717581 containerd[1722]: 2025-05-10 00:00:21.682 [INFO][5639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" May 10 00:00:21.717581 containerd[1722]: 2025-05-10 00:00:21.702 [INFO][5646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" HandleID="k8s-pod-network.3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 10 00:00:21.717581 containerd[1722]: 2025-05-10 00:00:21.702 [INFO][5646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:00:21.717581 containerd[1722]: 2025-05-10 00:00:21.702 [INFO][5646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:00:21.717581 containerd[1722]: 2025-05-10 00:00:21.712 [WARNING][5646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" HandleID="k8s-pod-network.3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 10 00:00:21.717581 containerd[1722]: 2025-05-10 00:00:21.712 [INFO][5646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" HandleID="k8s-pod-network.3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 10 00:00:21.717581 containerd[1722]: 2025-05-10 00:00:21.714 [INFO][5646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:00:21.717581 containerd[1722]: 2025-05-10 00:00:21.715 [INFO][5639] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" May 10 00:00:21.718276 containerd[1722]: time="2025-05-10T00:00:21.717627609Z" level=info msg="TearDown network for sandbox \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\" successfully" May 10 00:00:21.718276 containerd[1722]: time="2025-05-10T00:00:21.717689769Z" level=info msg="StopPodSandbox for \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\" returns successfully" May 10 00:00:21.718996 containerd[1722]: time="2025-05-10T00:00:21.718542769Z" level=info msg="RemovePodSandbox for \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\"" May 10 00:00:21.718996 containerd[1722]: time="2025-05-10T00:00:21.718588649Z" level=info msg="Forcibly stopping sandbox \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\"" May 10 00:00:21.796868 containerd[1722]: 2025-05-10 00:00:21.762 [WARNING][5664] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c6108b02-0f6c-4322-bab5-8d4e33e79daf", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"a67d126add62f1aa046bc4270b68b12a340c0bef483fa1ef5e73fb304dcc1048", Pod:"coredns-668d6bf9bc-nrnrs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif66e6889ef7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:00:21.796868 containerd[1722]: 2025-05-10 00:00:21.762 [INFO][5664] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" May 10 00:00:21.796868 containerd[1722]: 2025-05-10 00:00:21.762 [INFO][5664] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" iface="eth0" netns="" May 10 00:00:21.796868 containerd[1722]: 2025-05-10 00:00:21.762 [INFO][5664] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" May 10 00:00:21.796868 containerd[1722]: 2025-05-10 00:00:21.762 [INFO][5664] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" May 10 00:00:21.796868 containerd[1722]: 2025-05-10 00:00:21.782 [INFO][5671] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" HandleID="k8s-pod-network.3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 10 00:00:21.796868 containerd[1722]: 2025-05-10 00:00:21.782 [INFO][5671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:00:21.796868 containerd[1722]: 2025-05-10 00:00:21.782 [INFO][5671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:00:21.796868 containerd[1722]: 2025-05-10 00:00:21.791 [WARNING][5671] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" HandleID="k8s-pod-network.3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 10 00:00:21.796868 containerd[1722]: 2025-05-10 00:00:21.791 [INFO][5671] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" HandleID="k8s-pod-network.3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--nrnrs-eth0" May 10 00:00:21.796868 containerd[1722]: 2025-05-10 00:00:21.793 [INFO][5671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:00:21.796868 containerd[1722]: 2025-05-10 00:00:21.795 [INFO][5664] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229" May 10 00:00:21.798430 containerd[1722]: time="2025-05-10T00:00:21.797720392Z" level=info msg="TearDown network for sandbox \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\" successfully" May 10 00:00:21.810696 containerd[1722]: time="2025-05-10T00:00:21.810461869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:00:21.810696 containerd[1722]: time="2025-05-10T00:00:21.810548309Z" level=info msg="RemovePodSandbox \"3473a0eb1f0d8b05a967351ffb3f3243fc4103f1b0e8d2d9ba81e1ba7c161229\" returns successfully" May 10 00:00:21.811361 containerd[1722]: time="2025-05-10T00:00:21.811060709Z" level=info msg="StopPodSandbox for \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\"" May 10 00:00:21.893175 containerd[1722]: 2025-05-10 00:00:21.852 [WARNING][5689] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0", GenerateName:"calico-apiserver-5dc459848f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbe36fb6-41bc-4acd-b726-7ca42d2518b3", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc459848f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a", Pod:"calico-apiserver-5dc459848f-csr7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39e19212d44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:00:21.893175 containerd[1722]: 2025-05-10 00:00:21.853 [INFO][5689] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" May 10 00:00:21.893175 containerd[1722]: 2025-05-10 00:00:21.853 [INFO][5689] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" iface="eth0" netns="" May 10 00:00:21.893175 containerd[1722]: 2025-05-10 00:00:21.853 [INFO][5689] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" May 10 00:00:21.893175 containerd[1722]: 2025-05-10 00:00:21.853 [INFO][5689] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" May 10 00:00:21.893175 containerd[1722]: 2025-05-10 00:00:21.874 [INFO][5697] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" HandleID="k8s-pod-network.fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 10 00:00:21.893175 containerd[1722]: 2025-05-10 00:00:21.874 [INFO][5697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:00:21.893175 containerd[1722]: 2025-05-10 00:00:21.874 [INFO][5697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:00:21.893175 containerd[1722]: 2025-05-10 00:00:21.885 [WARNING][5697] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" HandleID="k8s-pod-network.fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 10 00:00:21.893175 containerd[1722]: 2025-05-10 00:00:21.886 [INFO][5697] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" HandleID="k8s-pod-network.fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 10 00:00:21.893175 containerd[1722]: 2025-05-10 00:00:21.889 [INFO][5697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:00:21.893175 containerd[1722]: 2025-05-10 00:00:21.890 [INFO][5689] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" May 10 00:00:21.894042 containerd[1722]: time="2025-05-10T00:00:21.893706211Z" level=info msg="TearDown network for sandbox \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\" successfully" May 10 00:00:21.894042 containerd[1722]: time="2025-05-10T00:00:21.893741491Z" level=info msg="StopPodSandbox for \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\" returns successfully" May 10 00:00:21.895969 containerd[1722]: time="2025-05-10T00:00:21.895342890Z" level=info msg="RemovePodSandbox for \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\"" May 10 00:00:21.895969 containerd[1722]: time="2025-05-10T00:00:21.895391930Z" level=info msg="Forcibly stopping sandbox \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\"" May 10 00:00:21.973321 containerd[1722]: 2025-05-10 00:00:21.935 [WARNING][5715] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0", GenerateName:"calico-apiserver-5dc459848f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbe36fb6-41bc-4acd-b726-7ca42d2518b3", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc459848f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"bc4891a148e5df9919dae397bc6ae4554dd18a0edba8485563d3c43c7dfdda9a", Pod:"calico-apiserver-5dc459848f-csr7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39e19212d44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:00:21.973321 containerd[1722]: 2025-05-10 00:00:21.935 [INFO][5715] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" May 10 00:00:21.973321 containerd[1722]: 2025-05-10 00:00:21.935 [INFO][5715] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" iface="eth0" netns="" May 10 00:00:21.973321 containerd[1722]: 2025-05-10 00:00:21.935 [INFO][5715] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" May 10 00:00:21.973321 containerd[1722]: 2025-05-10 00:00:21.935 [INFO][5715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" May 10 00:00:21.973321 containerd[1722]: 2025-05-10 00:00:21.958 [INFO][5722] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" HandleID="k8s-pod-network.fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 10 00:00:21.973321 containerd[1722]: 2025-05-10 00:00:21.958 [INFO][5722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:00:21.973321 containerd[1722]: 2025-05-10 00:00:21.958 [INFO][5722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:00:21.973321 containerd[1722]: 2025-05-10 00:00:21.968 [WARNING][5722] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" HandleID="k8s-pod-network.fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 10 00:00:21.973321 containerd[1722]: 2025-05-10 00:00:21.968 [INFO][5722] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" HandleID="k8s-pod-network.fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--csr7s-eth0" May 10 00:00:21.973321 containerd[1722]: 2025-05-10 00:00:21.970 [INFO][5722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:00:21.973321 containerd[1722]: 2025-05-10 00:00:21.971 [INFO][5715] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9" May 10 00:00:21.973777 containerd[1722]: time="2025-05-10T00:00:21.973392233Z" level=info msg="TearDown network for sandbox \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\" successfully" May 10 00:00:21.985785 containerd[1722]: time="2025-05-10T00:00:21.985726751Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:00:21.985915 containerd[1722]: time="2025-05-10T00:00:21.985816111Z" level=info msg="RemovePodSandbox \"fc2923c6730ff0273eae09bfc95e80f956588606f9c389cad1ff991ea5431eb9\" returns successfully" May 10 00:00:21.986727 containerd[1722]: time="2025-05-10T00:00:21.986694151Z" level=info msg="StopPodSandbox for \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\"" May 10 00:00:22.078477 containerd[1722]: 2025-05-10 00:00:22.038 [WARNING][5740] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b6350e5e-ccbb-494f-aa01-4897633a5f14", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e", Pod:"coredns-668d6bf9bc-55grc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie98c5dbbe03", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:00:22.078477 containerd[1722]: 2025-05-10 00:00:22.038 [INFO][5740] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" May 10 00:00:22.078477 containerd[1722]: 2025-05-10 00:00:22.038 [INFO][5740] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" iface="eth0" netns="" May 10 00:00:22.078477 containerd[1722]: 2025-05-10 00:00:22.038 [INFO][5740] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" May 10 00:00:22.078477 containerd[1722]: 2025-05-10 00:00:22.038 [INFO][5740] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" May 10 00:00:22.078477 containerd[1722]: 2025-05-10 00:00:22.063 [INFO][5747] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" HandleID="k8s-pod-network.242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 10 00:00:22.078477 containerd[1722]: 2025-05-10 00:00:22.063 [INFO][5747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:00:22.078477 containerd[1722]: 2025-05-10 00:00:22.063 [INFO][5747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:00:22.078477 containerd[1722]: 2025-05-10 00:00:22.073 [WARNING][5747] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" HandleID="k8s-pod-network.242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 10 00:00:22.078477 containerd[1722]: 2025-05-10 00:00:22.073 [INFO][5747] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" HandleID="k8s-pod-network.242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 10 00:00:22.078477 containerd[1722]: 2025-05-10 00:00:22.075 [INFO][5747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:00:22.078477 containerd[1722]: 2025-05-10 00:00:22.076 [INFO][5740] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" May 10 00:00:22.078477 containerd[1722]: time="2025-05-10T00:00:22.078324931Z" level=info msg="TearDown network for sandbox \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\" successfully" May 10 00:00:22.078477 containerd[1722]: time="2025-05-10T00:00:22.078371171Z" level=info msg="StopPodSandbox for \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\" returns successfully" May 10 00:00:22.079968 containerd[1722]: time="2025-05-10T00:00:22.079576570Z" level=info msg="RemovePodSandbox for \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\"" May 10 00:00:22.079968 containerd[1722]: time="2025-05-10T00:00:22.079617250Z" level=info msg="Forcibly stopping sandbox \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\"" May 10 00:00:22.176905 containerd[1722]: 2025-05-10 00:00:22.119 [WARNING][5765] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b6350e5e-ccbb-494f-aa01-4897633a5f14", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"609e52fbd1163a1bcd5addebd3054dbb92de6ff0aa4cac14104941e2d0901d1e", Pod:"coredns-668d6bf9bc-55grc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie98c5dbbe03", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:00:22.176905 containerd[1722]: 2025-05-10 00:00:22.119 [INFO][5765] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" May 10 00:00:22.176905 containerd[1722]: 2025-05-10 00:00:22.119 [INFO][5765] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" iface="eth0" netns="" May 10 00:00:22.176905 containerd[1722]: 2025-05-10 00:00:22.119 [INFO][5765] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" May 10 00:00:22.176905 containerd[1722]: 2025-05-10 00:00:22.119 [INFO][5765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" May 10 00:00:22.176905 containerd[1722]: 2025-05-10 00:00:22.154 [INFO][5772] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" HandleID="k8s-pod-network.242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 10 00:00:22.176905 containerd[1722]: 2025-05-10 00:00:22.154 [INFO][5772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:00:22.176905 containerd[1722]: 2025-05-10 00:00:22.154 [INFO][5772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:00:22.176905 containerd[1722]: 2025-05-10 00:00:22.168 [WARNING][5772] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" HandleID="k8s-pod-network.242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 10 00:00:22.176905 containerd[1722]: 2025-05-10 00:00:22.168 [INFO][5772] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" HandleID="k8s-pod-network.242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" Workload="ci--4081.3.3--n--84ab9604c4-k8s-coredns--668d6bf9bc--55grc-eth0" May 10 00:00:22.176905 containerd[1722]: 2025-05-10 00:00:22.170 [INFO][5772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:00:22.176905 containerd[1722]: 2025-05-10 00:00:22.175 [INFO][5765] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1" May 10 00:00:22.177437 containerd[1722]: time="2025-05-10T00:00:22.176951469Z" level=info msg="TearDown network for sandbox \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\" successfully" May 10 00:00:22.188091 containerd[1722]: time="2025-05-10T00:00:22.188032547Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:00:22.188239 containerd[1722]: time="2025-05-10T00:00:22.188123667Z" level=info msg="RemovePodSandbox \"242ef02f90d5ef3e180dda60d40b5eb17634fa45872fc9c3a65d3a79f56817c1\" returns successfully" May 10 00:00:22.192292 containerd[1722]: time="2025-05-10T00:00:22.191968586Z" level=info msg="StopPodSandbox for \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\"" May 10 00:00:22.277356 containerd[1722]: 2025-05-10 00:00:22.237 [WARNING][5790] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0", GenerateName:"calico-apiserver-5dc459848f-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a08f28a-435c-4180-8af5-1b9c1f60f93b", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc459848f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6", Pod:"calico-apiserver-5dc459848f-p7gmj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali077cffecf68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:00:22.277356 containerd[1722]: 2025-05-10 00:00:22.237 [INFO][5790] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" May 10 00:00:22.277356 containerd[1722]: 2025-05-10 00:00:22.237 [INFO][5790] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" iface="eth0" netns="" May 10 00:00:22.277356 containerd[1722]: 2025-05-10 00:00:22.237 [INFO][5790] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" May 10 00:00:22.277356 containerd[1722]: 2025-05-10 00:00:22.237 [INFO][5790] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" May 10 00:00:22.277356 containerd[1722]: 2025-05-10 00:00:22.259 [INFO][5797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" HandleID="k8s-pod-network.db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:22.277356 containerd[1722]: 2025-05-10 00:00:22.260 [INFO][5797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:00:22.277356 containerd[1722]: 2025-05-10 00:00:22.260 [INFO][5797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:00:22.277356 containerd[1722]: 2025-05-10 00:00:22.269 [WARNING][5797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" HandleID="k8s-pod-network.db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:22.277356 containerd[1722]: 2025-05-10 00:00:22.269 [INFO][5797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" HandleID="k8s-pod-network.db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:22.277356 containerd[1722]: 2025-05-10 00:00:22.271 [INFO][5797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:00:22.277356 containerd[1722]: 2025-05-10 00:00:22.275 [INFO][5790] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" May 10 00:00:22.278588 containerd[1722]: time="2025-05-10T00:00:22.277402767Z" level=info msg="TearDown network for sandbox \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\" successfully" May 10 00:00:22.278588 containerd[1722]: time="2025-05-10T00:00:22.277432447Z" level=info msg="StopPodSandbox for \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\" returns successfully" May 10 00:00:22.278588 containerd[1722]: time="2025-05-10T00:00:22.277952127Z" level=info msg="RemovePodSandbox for \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\"" May 10 00:00:22.278588 containerd[1722]: time="2025-05-10T00:00:22.277983487Z" level=info msg="Forcibly stopping sandbox \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\"" May 10 00:00:22.363472 containerd[1722]: 2025-05-10 00:00:22.329 [WARNING][5815] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0", GenerateName:"calico-apiserver-5dc459848f-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a08f28a-435c-4180-8af5-1b9c1f60f93b", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 23, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc459848f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-84ab9604c4", ContainerID:"c95283f1235515426d8d247795b442668315708f4346e4fca990dbeec152d7e6", Pod:"calico-apiserver-5dc459848f-p7gmj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali077cffecf68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:00:22.363472 containerd[1722]: 2025-05-10 00:00:22.329 [INFO][5815] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" May 10 00:00:22.363472 containerd[1722]: 2025-05-10 00:00:22.329 [INFO][5815] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" iface="eth0" netns="" May 10 00:00:22.363472 containerd[1722]: 2025-05-10 00:00:22.329 [INFO][5815] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" May 10 00:00:22.363472 containerd[1722]: 2025-05-10 00:00:22.329 [INFO][5815] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" May 10 00:00:22.363472 containerd[1722]: 2025-05-10 00:00:22.349 [INFO][5822] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" HandleID="k8s-pod-network.db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:22.363472 containerd[1722]: 2025-05-10 00:00:22.349 [INFO][5822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:00:22.363472 containerd[1722]: 2025-05-10 00:00:22.349 [INFO][5822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:00:22.363472 containerd[1722]: 2025-05-10 00:00:22.358 [WARNING][5822] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" HandleID="k8s-pod-network.db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:22.363472 containerd[1722]: 2025-05-10 00:00:22.358 [INFO][5822] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" HandleID="k8s-pod-network.db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" Workload="ci--4081.3.3--n--84ab9604c4-k8s-calico--apiserver--5dc459848f--p7gmj-eth0" May 10 00:00:22.363472 containerd[1722]: 2025-05-10 00:00:22.360 [INFO][5822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:00:22.363472 containerd[1722]: 2025-05-10 00:00:22.361 [INFO][5815] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948" May 10 00:00:22.363472 containerd[1722]: time="2025-05-10T00:00:22.363423749Z" level=info msg="TearDown network for sandbox \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\" successfully" May 10 00:00:22.378172 containerd[1722]: time="2025-05-10T00:00:22.378115385Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:00:22.378373 containerd[1722]: time="2025-05-10T00:00:22.378196545Z" level=info msg="RemovePodSandbox \"db852a19600083e44a5687838e369902f8f4cf0b42f4491236cc1aa8c01cc948\" returns successfully" May 10 00:00:31.528347 kubelet[3155]: I0510 00:00:31.528274 3155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:00:31.566714 kubelet[3155]: I0510 00:00:31.565503 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7sw5t" podStartSLOduration=34.719039743 podStartE2EDuration="56.565483014s" podCreationTimestamp="2025-05-09 23:59:35 +0000 UTC" firstStartedPulling="2025-05-09 23:59:58.192737703 +0000 UTC m=+37.078640243" lastFinishedPulling="2025-05-10 00:00:20.039180934 +0000 UTC m=+58.925083514" observedRunningTime="2025-05-10 00:00:20.590801414 +0000 UTC m=+59.476703994" watchObservedRunningTime="2025-05-10 00:00:31.565483014 +0000 UTC m=+70.451385594" May 10 00:00:57.613414 systemd[1]: Started sshd@7-10.200.20.38:22-10.200.16.10:38816.service - OpenSSH per-connection server daemon (10.200.16.10:38816). May 10 00:00:58.068290 sshd[5912]: Accepted publickey for core from 10.200.16.10 port 38816 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:00:58.070572 sshd[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:58.076514 systemd-logind[1686]: New session 10 of user core. May 10 00:00:58.081874 systemd[1]: Started session-10.scope - Session 10 of User core. May 10 00:00:58.489917 sshd[5912]: pam_unix(sshd:session): session closed for user core May 10 00:00:58.494083 systemd[1]: sshd@7-10.200.20.38:22-10.200.16.10:38816.service: Deactivated successfully. May 10 00:00:58.497682 systemd[1]: session-10.scope: Deactivated successfully. May 10 00:00:58.500088 systemd-logind[1686]: Session 10 logged out. Waiting for processes to exit. May 10 00:00:58.501251 systemd-logind[1686]: Removed session 10. 
May 10 00:01:03.581601 systemd[1]: Started sshd@8-10.200.20.38:22-10.200.16.10:42654.service - OpenSSH per-connection server daemon (10.200.16.10:42654). May 10 00:01:04.027359 sshd[5950]: Accepted publickey for core from 10.200.16.10 port 42654 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:01:04.029016 sshd[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:04.033279 systemd-logind[1686]: New session 11 of user core. May 10 00:01:04.043821 systemd[1]: Started session-11.scope - Session 11 of User core. May 10 00:01:04.418928 sshd[5950]: pam_unix(sshd:session): session closed for user core May 10 00:01:04.422933 systemd-logind[1686]: Session 11 logged out. Waiting for processes to exit. May 10 00:01:04.423519 systemd[1]: sshd@8-10.200.20.38:22-10.200.16.10:42654.service: Deactivated successfully. May 10 00:01:04.426745 systemd[1]: session-11.scope: Deactivated successfully. May 10 00:01:04.428581 systemd-logind[1686]: Removed session 11. May 10 00:01:09.502978 systemd[1]: Started sshd@9-10.200.20.38:22-10.200.16.10:52440.service - OpenSSH per-connection server daemon (10.200.16.10:52440). May 10 00:01:09.911568 sshd[5964]: Accepted publickey for core from 10.200.16.10 port 52440 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:01:09.913018 sshd[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:09.917501 systemd-logind[1686]: New session 12 of user core. May 10 00:01:09.921822 systemd[1]: Started session-12.scope - Session 12 of User core. May 10 00:01:10.295947 sshd[5964]: pam_unix(sshd:session): session closed for user core May 10 00:01:10.299983 systemd[1]: sshd@9-10.200.20.38:22-10.200.16.10:52440.service: Deactivated successfully. May 10 00:01:10.302692 systemd[1]: session-12.scope: Deactivated successfully. May 10 00:01:10.304500 systemd-logind[1686]: Session 12 logged out. Waiting for processes to exit. May 10 00:01:10.306965 systemd-logind[1686]: Removed session 12. May 10 00:01:15.388010 systemd[1]: Started sshd@10-10.200.20.38:22-10.200.16.10:52444.service - OpenSSH per-connection server daemon (10.200.16.10:52444). May 10 00:01:15.833511 sshd[6003]: Accepted publickey for core from 10.200.16.10 port 52444 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:01:15.835019 sshd[6003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:15.839281 systemd-logind[1686]: New session 13 of user core. May 10 00:01:15.844817 systemd[1]: Started session-13.scope - Session 13 of User core. May 10 00:01:16.231378 sshd[6003]: pam_unix(sshd:session): session closed for user core May 10 00:01:16.235513 systemd[1]: sshd@10-10.200.20.38:22-10.200.16.10:52444.service: Deactivated successfully. May 10 00:01:16.237863 systemd[1]: session-13.scope: Deactivated successfully. May 10 00:01:16.239124 systemd-logind[1686]: Session 13 logged out. Waiting for processes to exit. May 10 00:01:16.240187 systemd-logind[1686]: Removed session 13. May 10 00:01:16.318133 systemd[1]: Started sshd@11-10.200.20.38:22-10.200.16.10:52450.service - OpenSSH per-connection server daemon (10.200.16.10:52450). 
May 10 00:01:16.743840 sshd[6016]: Accepted publickey for core from 10.200.16.10 port 52450 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:01:16.745347 sshd[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:16.750090 systemd-logind[1686]: New session 14 of user core.
May 10 00:01:16.758848 systemd[1]: Started session-14.scope - Session 14 of User core.
May 10 00:01:17.164621 sshd[6016]: pam_unix(sshd:session): session closed for user core
May 10 00:01:17.169287 systemd[1]: sshd@11-10.200.20.38:22-10.200.16.10:52450.service: Deactivated successfully.
May 10 00:01:17.171952 systemd[1]: session-14.scope: Deactivated successfully.
May 10 00:01:17.172961 systemd-logind[1686]: Session 14 logged out. Waiting for processes to exit.
May 10 00:01:17.174587 systemd-logind[1686]: Removed session 14.
May 10 00:01:17.267507 systemd[1]: Started sshd@12-10.200.20.38:22-10.200.16.10:52452.service - OpenSSH per-connection server daemon (10.200.16.10:52452).
May 10 00:01:17.708661 sshd[6028]: Accepted publickey for core from 10.200.16.10 port 52452 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:01:17.710250 sshd[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:17.714721 systemd-logind[1686]: New session 15 of user core.
May 10 00:01:17.722830 systemd[1]: Started session-15.scope - Session 15 of User core.
May 10 00:01:18.097694 sshd[6028]: pam_unix(sshd:session): session closed for user core
May 10 00:01:18.101450 systemd-logind[1686]: Session 15 logged out. Waiting for processes to exit.
May 10 00:01:18.102231 systemd[1]: sshd@12-10.200.20.38:22-10.200.16.10:52452.service: Deactivated successfully.
May 10 00:01:18.105603 systemd[1]: session-15.scope: Deactivated successfully.
May 10 00:01:18.107129 systemd-logind[1686]: Removed session 15.
May 10 00:01:23.181973 systemd[1]: Started sshd@13-10.200.20.38:22-10.200.16.10:37518.service - OpenSSH per-connection server daemon (10.200.16.10:37518).
May 10 00:01:23.597579 sshd[6056]: Accepted publickey for core from 10.200.16.10 port 37518 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:01:23.599199 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:23.604251 systemd-logind[1686]: New session 16 of user core.
May 10 00:01:23.615860 systemd[1]: Started session-16.scope - Session 16 of User core.
May 10 00:01:23.977658 sshd[6056]: pam_unix(sshd:session): session closed for user core
May 10 00:01:23.981969 systemd[1]: sshd@13-10.200.20.38:22-10.200.16.10:37518.service: Deactivated successfully.
May 10 00:01:23.982086 systemd-logind[1686]: Session 16 logged out. Waiting for processes to exit.
May 10 00:01:23.985347 systemd[1]: session-16.scope: Deactivated successfully.
May 10 00:01:23.986695 systemd-logind[1686]: Removed session 16.
May 10 00:01:29.063312 systemd[1]: Started sshd@14-10.200.20.38:22-10.200.16.10:53996.service - OpenSSH per-connection server daemon (10.200.16.10:53996).
May 10 00:01:29.521566 sshd[6073]: Accepted publickey for core from 10.200.16.10 port 53996 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:01:29.525382 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:29.533156 systemd-logind[1686]: New session 17 of user core.
May 10 00:01:29.542264 systemd[1]: Started session-17.scope - Session 17 of User core.
May 10 00:01:29.920951 sshd[6073]: pam_unix(sshd:session): session closed for user core
May 10 00:01:29.924502 systemd[1]: sshd@14-10.200.20.38:22-10.200.16.10:53996.service: Deactivated successfully.
May 10 00:01:29.927370 systemd[1]: session-17.scope: Deactivated successfully.
May 10 00:01:29.929726 systemd-logind[1686]: Session 17 logged out. Waiting for processes to exit.
May 10 00:01:29.931690 systemd-logind[1686]: Removed session 17.
May 10 00:01:33.524545 systemd[1]: run-containerd-runc-k8s.io-09e00de07aa88963e77cab300e9aaf4bc340f1e5d55dbe39f2f9f009667678ed-runc.p5ADx3.mount: Deactivated successfully.
May 10 00:01:35.005993 systemd[1]: Started sshd@15-10.200.20.38:22-10.200.16.10:54012.service - OpenSSH per-connection server daemon (10.200.16.10:54012).
May 10 00:01:35.422710 sshd[6116]: Accepted publickey for core from 10.200.16.10 port 54012 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:01:35.424396 sshd[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:35.429667 systemd-logind[1686]: New session 18 of user core.
May 10 00:01:35.434859 systemd[1]: Started session-18.scope - Session 18 of User core.
May 10 00:01:35.807698 sshd[6116]: pam_unix(sshd:session): session closed for user core
May 10 00:01:35.811847 systemd-logind[1686]: Session 18 logged out. Waiting for processes to exit.
May 10 00:01:35.812602 systemd[1]: sshd@15-10.200.20.38:22-10.200.16.10:54012.service: Deactivated successfully.
May 10 00:01:35.815818 systemd[1]: session-18.scope: Deactivated successfully.
May 10 00:01:35.817398 systemd-logind[1686]: Removed session 18.
May 10 00:01:40.894977 systemd[1]: Started sshd@16-10.200.20.38:22-10.200.16.10:42942.service - OpenSSH per-connection server daemon (10.200.16.10:42942).
May 10 00:01:41.305348 sshd[6129]: Accepted publickey for core from 10.200.16.10 port 42942 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:01:41.306950 sshd[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:41.311876 systemd-logind[1686]: New session 19 of user core.
May 10 00:01:41.315865 systemd[1]: Started session-19.scope - Session 19 of User core.
May 10 00:01:41.861908 sshd[6129]: pam_unix(sshd:session): session closed for user core
May 10 00:01:41.865876 systemd[1]: sshd@16-10.200.20.38:22-10.200.16.10:42942.service: Deactivated successfully.
May 10 00:01:41.868195 systemd[1]: session-19.scope: Deactivated successfully.
May 10 00:01:41.869239 systemd-logind[1686]: Session 19 logged out. Waiting for processes to exit.
May 10 00:01:41.870268 systemd-logind[1686]: Removed session 19.
May 10 00:01:46.950970 systemd[1]: Started sshd@17-10.200.20.38:22-10.200.16.10:42948.service - OpenSSH per-connection server daemon (10.200.16.10:42948).
May 10 00:01:47.394954 sshd[6182]: Accepted publickey for core from 10.200.16.10 port 42948 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:01:47.396628 sshd[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:47.401121 systemd-logind[1686]: New session 20 of user core.
May 10 00:01:47.409904 systemd[1]: Started session-20.scope - Session 20 of User core.
May 10 00:01:47.785532 sshd[6182]: pam_unix(sshd:session): session closed for user core
May 10 00:01:47.790013 systemd-logind[1686]: Session 20 logged out. Waiting for processes to exit.
May 10 00:01:47.790891 systemd[1]: sshd@17-10.200.20.38:22-10.200.16.10:42948.service: Deactivated successfully.
May 10 00:01:47.793768 systemd[1]: session-20.scope: Deactivated successfully.
May 10 00:01:47.795312 systemd-logind[1686]: Removed session 20.
May 10 00:01:47.869984 systemd[1]: Started sshd@18-10.200.20.38:22-10.200.16.10:42964.service - OpenSSH per-connection server daemon (10.200.16.10:42964).
May 10 00:01:48.313617 sshd[6195]: Accepted publickey for core from 10.200.16.10 port 42964 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:01:48.317594 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:48.327399 systemd-logind[1686]: New session 21 of user core.
May 10 00:01:48.335840 systemd[1]: Started session-21.scope - Session 21 of User core.
May 10 00:01:48.782689 sshd[6195]: pam_unix(sshd:session): session closed for user core
May 10 00:01:48.787007 systemd-logind[1686]: Session 21 logged out. Waiting for processes to exit.
May 10 00:01:48.787583 systemd[1]: sshd@18-10.200.20.38:22-10.200.16.10:42964.service: Deactivated successfully.
May 10 00:01:48.791245 systemd[1]: session-21.scope: Deactivated successfully.
May 10 00:01:48.793370 systemd-logind[1686]: Removed session 21.
May 10 00:01:48.863984 systemd[1]: Started sshd@19-10.200.20.38:22-10.200.16.10:42974.service - OpenSSH per-connection server daemon (10.200.16.10:42974).
May 10 00:01:49.308298 sshd[6206]: Accepted publickey for core from 10.200.16.10 port 42974 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:01:49.310164 sshd[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:49.318818 systemd-logind[1686]: New session 22 of user core.
May 10 00:01:49.326873 systemd[1]: Started session-22.scope - Session 22 of User core.
May 10 00:01:50.876436 sshd[6206]: pam_unix(sshd:session): session closed for user core
May 10 00:01:50.880864 systemd[1]: sshd@19-10.200.20.38:22-10.200.16.10:42974.service: Deactivated successfully.
May 10 00:01:50.883594 systemd[1]: session-22.scope: Deactivated successfully.
May 10 00:01:50.884860 systemd-logind[1686]: Session 22 logged out. Waiting for processes to exit.
May 10 00:01:50.886261 systemd-logind[1686]: Removed session 22.
May 10 00:01:50.968953 systemd[1]: Started sshd@20-10.200.20.38:22-10.200.16.10:53150.service - OpenSSH per-connection server daemon (10.200.16.10:53150).
May 10 00:01:51.414303 sshd[6237]: Accepted publickey for core from 10.200.16.10 port 53150 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:01:51.415990 sshd[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:51.420282 systemd-logind[1686]: New session 23 of user core.
May 10 00:01:51.427847 systemd[1]: Started session-23.scope - Session 23 of User core.
May 10 00:01:51.945239 sshd[6237]: pam_unix(sshd:session): session closed for user core
May 10 00:01:51.949728 systemd[1]: sshd@20-10.200.20.38:22-10.200.16.10:53150.service: Deactivated successfully.
May 10 00:01:51.952205 systemd[1]: session-23.scope: Deactivated successfully.
May 10 00:01:51.953121 systemd-logind[1686]: Session 23 logged out. Waiting for processes to exit.
May 10 00:01:51.954463 systemd-logind[1686]: Removed session 23.
May 10 00:01:52.029020 systemd[1]: Started sshd@21-10.200.20.38:22-10.200.16.10:53152.service - OpenSSH per-connection server daemon (10.200.16.10:53152).
May 10 00:01:52.440605 sshd[6249]: Accepted publickey for core from 10.200.16.10 port 53152 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:01:52.442217 sshd[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:52.446690 systemd-logind[1686]: New session 24 of user core.
May 10 00:01:52.455818 systemd[1]: Started session-24.scope - Session 24 of User core.
May 10 00:01:52.815631 sshd[6249]: pam_unix(sshd:session): session closed for user core
May 10 00:01:52.818602 systemd[1]: sshd@21-10.200.20.38:22-10.200.16.10:53152.service: Deactivated successfully.
May 10 00:01:52.821014 systemd[1]: session-24.scope: Deactivated successfully.
May 10 00:01:52.822899 systemd-logind[1686]: Session 24 logged out. Waiting for processes to exit.
May 10 00:01:52.824072 systemd-logind[1686]: Removed session 24.
May 10 00:01:57.896026 systemd[1]: Started sshd@22-10.200.20.38:22-10.200.16.10:53160.service - OpenSSH per-connection server daemon (10.200.16.10:53160).
May 10 00:01:58.307943 sshd[6264]: Accepted publickey for core from 10.200.16.10 port 53160 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:01:58.309536 sshd[6264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:58.316992 systemd-logind[1686]: New session 25 of user core.
May 10 00:01:58.317876 systemd[1]: Started session-25.scope - Session 25 of User core.
May 10 00:01:58.674938 sshd[6264]: pam_unix(sshd:session): session closed for user core
May 10 00:01:58.678893 systemd-logind[1686]: Session 25 logged out. Waiting for processes to exit.
May 10 00:01:58.680153 systemd[1]: sshd@22-10.200.20.38:22-10.200.16.10:53160.service: Deactivated successfully.
May 10 00:01:58.682508 systemd[1]: session-25.scope: Deactivated successfully.
May 10 00:01:58.684316 systemd-logind[1686]: Removed session 25.
May 10 00:02:03.756950 systemd[1]: Started sshd@23-10.200.20.38:22-10.200.16.10:49886.service - OpenSSH per-connection server daemon (10.200.16.10:49886).
May 10 00:02:04.170346 sshd[6302]: Accepted publickey for core from 10.200.16.10 port 49886 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:02:04.171889 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:04.176184 systemd-logind[1686]: New session 26 of user core.
May 10 00:02:04.184838 systemd[1]: Started session-26.scope - Session 26 of User core.
May 10 00:02:04.533905 sshd[6302]: pam_unix(sshd:session): session closed for user core
May 10 00:02:04.537471 systemd[1]: sshd@23-10.200.20.38:22-10.200.16.10:49886.service: Deactivated successfully.
May 10 00:02:04.539906 systemd[1]: session-26.scope: Deactivated successfully.
May 10 00:02:04.543367 systemd-logind[1686]: Session 26 logged out. Waiting for processes to exit.
May 10 00:02:04.544441 systemd-logind[1686]: Removed session 26.
May 10 00:02:09.618992 systemd[1]: Started sshd@24-10.200.20.38:22-10.200.16.10:39376.service - OpenSSH per-connection server daemon (10.200.16.10:39376).
May 10 00:02:10.032069 sshd[6314]: Accepted publickey for core from 10.200.16.10 port 39376 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:02:10.033719 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:10.038858 systemd-logind[1686]: New session 27 of user core.
May 10 00:02:10.048860 systemd[1]: Started session-27.scope - Session 27 of User core.
May 10 00:02:10.413453 sshd[6314]: pam_unix(sshd:session): session closed for user core
May 10 00:02:10.416697 systemd-logind[1686]: Session 27 logged out. Waiting for processes to exit.
May 10 00:02:10.417392 systemd[1]: sshd@24-10.200.20.38:22-10.200.16.10:39376.service: Deactivated successfully.
May 10 00:02:10.420033 systemd[1]: session-27.scope: Deactivated successfully.
May 10 00:02:10.422571 systemd-logind[1686]: Removed session 27.
May 10 00:02:15.493049 systemd[1]: Started sshd@25-10.200.20.38:22-10.200.16.10:39382.service - OpenSSH per-connection server daemon (10.200.16.10:39382).
May 10 00:02:15.903540 sshd[6346]: Accepted publickey for core from 10.200.16.10 port 39382 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:02:15.905012 sshd[6346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:15.909426 systemd-logind[1686]: New session 28 of user core.
May 10 00:02:15.911844 systemd[1]: Started session-28.scope - Session 28 of User core.
May 10 00:02:16.273186 sshd[6346]: pam_unix(sshd:session): session closed for user core
May 10 00:02:16.277432 systemd[1]: sshd@25-10.200.20.38:22-10.200.16.10:39382.service: Deactivated successfully.
May 10 00:02:16.279627 systemd[1]: session-28.scope: Deactivated successfully.
May 10 00:02:16.280685 systemd-logind[1686]: Session 28 logged out. Waiting for processes to exit.
May 10 00:02:16.282313 systemd-logind[1686]: Removed session 28.
May 10 00:02:21.356955 systemd[1]: Started sshd@26-10.200.20.38:22-10.200.16.10:43706.service - OpenSSH per-connection server daemon (10.200.16.10:43706).
May 10 00:02:21.768662 sshd[6361]: Accepted publickey for core from 10.200.16.10 port 43706 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:02:21.770518 sshd[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:21.775042 systemd-logind[1686]: New session 29 of user core.
May 10 00:02:21.780878 systemd[1]: Started session-29.scope - Session 29 of User core.
May 10 00:02:22.157467 sshd[6361]: pam_unix(sshd:session): session closed for user core
May 10 00:02:22.162018 systemd[1]: sshd@26-10.200.20.38:22-10.200.16.10:43706.service: Deactivated successfully.
May 10 00:02:22.165479 systemd[1]: session-29.scope: Deactivated successfully.
May 10 00:02:22.168509 systemd-logind[1686]: Session 29 logged out. Waiting for processes to exit.
May 10 00:02:22.169838 systemd-logind[1686]: Removed session 29.
May 10 00:02:27.238977 systemd[1]: Started sshd@27-10.200.20.38:22-10.200.16.10:43716.service - OpenSSH per-connection server daemon (10.200.16.10:43716).
May 10 00:02:27.688767 sshd[6375]: Accepted publickey for core from 10.200.16.10 port 43716 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:02:27.691057 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:27.695411 systemd-logind[1686]: New session 30 of user core.
May 10 00:02:27.706865 systemd[1]: Started session-30.scope - Session 30 of User core.
May 10 00:02:28.077522 sshd[6375]: pam_unix(sshd:session): session closed for user core
May 10 00:02:28.082006 systemd[1]: sshd@27-10.200.20.38:22-10.200.16.10:43716.service: Deactivated successfully.
May 10 00:02:28.085202 systemd[1]: session-30.scope: Deactivated successfully.
May 10 00:02:28.086687 systemd-logind[1686]: Session 30 logged out. Waiting for processes to exit.
May 10 00:02:28.087869 systemd-logind[1686]: Removed session 30.