May 13 23:41:53.409822 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 23:41:53.409844 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025
May 13 23:41:53.409852 kernel: KASLR enabled
May 13 23:41:53.409858 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
May 13 23:41:53.409866 kernel: printk: bootconsole [pl11] enabled
May 13 23:41:53.409871 kernel: efi: EFI v2.7 by EDK II
May 13 23:41:53.409878 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
May 13 23:41:53.409884 kernel: random: crng init done
May 13 23:41:53.409889 kernel: secureboot: Secure boot disabled
May 13 23:41:53.409895 kernel: ACPI: Early table checksum verification disabled
May 13 23:41:53.409901 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
May 13 23:41:53.409907 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:41:53.409913 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:41:53.409920 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 13 23:41:53.409928 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:41:53.409933 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:41:53.409940 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:41:53.409947 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:41:53.409954 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:41:53.409960 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:41:53.409966 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
May 13 23:41:53.409972 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:41:53.409978 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
May 13 23:41:53.409985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
May 13 23:41:53.409991 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
May 13 23:41:53.409997 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
May 13 23:41:53.410008 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
May 13 23:41:53.410024 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
May 13 23:41:53.410032 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
May 13 23:41:53.410039 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
May 13 23:41:53.410045 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
May 13 23:41:53.410051 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
May 13 23:41:53.410057 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
May 13 23:41:53.410064 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
May 13 23:41:53.410070 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
May 13 23:41:53.410076 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
May 13 23:41:53.410082 kernel: Zone ranges:
May 13 23:41:53.410088 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
May 13 23:41:53.410094 kernel: DMA32 empty
May 13 23:41:53.410100 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
May 13 23:41:53.410111 kernel: Movable zone start for each node
May 13 23:41:53.410117 kernel: Early memory node ranges
May 13 23:41:53.410123 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
May 13 23:41:53.410130 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
May 13 23:41:53.410137 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
May 13 23:41:53.410145 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
May 13 23:41:53.410151 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
May 13 23:41:53.410158 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
May 13 23:41:53.410165 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
May 13 23:41:53.410171 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
May 13 23:41:53.410178 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
May 13 23:41:53.410184 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
May 13 23:41:53.410191 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
May 13 23:41:53.410197 kernel: psci: probing for conduit method from ACPI.
May 13 23:41:53.410204 kernel: psci: PSCIv1.1 detected in firmware.
May 13 23:41:53.410210 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 23:41:53.410217 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 13 23:41:53.410224 kernel: psci: SMC Calling Convention v1.4
May 13 23:41:53.410231 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 13 23:41:53.410237 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
May 13 23:41:53.410244 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 13 23:41:53.410250 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 13 23:41:53.410257 kernel: pcpu-alloc: [0] 0 [0] 1
May 13 23:41:53.412296 kernel: Detected PIPT I-cache on CPU0
May 13 23:41:53.412323 kernel: CPU features: detected: GIC system register CPU interface
May 13 23:41:53.412331 kernel: CPU features: detected: Hardware dirty bit management
May 13 23:41:53.412338 kernel: CPU features: detected: Spectre-BHB
May 13 23:41:53.412344 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 23:41:53.412356 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 23:41:53.412363 kernel: CPU features: detected: ARM erratum 1418040
May 13 23:41:53.412369 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
May 13 23:41:53.412376 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 23:41:53.412382 kernel: alternatives: applying boot alternatives
May 13 23:41:53.412391 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 13 23:41:53.412398 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:41:53.412405 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 23:41:53.412411 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 23:41:53.412418 kernel: Fallback order for Node 0: 0
May 13 23:41:53.412425 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
May 13 23:41:53.412433 kernel: Policy zone: Normal
May 13 23:41:53.412440 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:41:53.412446 kernel: software IO TLB: area num 2.
May 13 23:41:53.412453 kernel: software IO TLB: mapped [mem 0x0000000036520000-0x000000003a520000] (64MB)
May 13 23:41:53.412459 kernel: Memory: 3983464K/4194160K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 210696K reserved, 0K cma-reserved)
May 13 23:41:53.412466 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 13 23:41:53.412473 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:41:53.412480 kernel: rcu: RCU event tracing is enabled.
May 13 23:41:53.412487 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 13 23:41:53.412493 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:41:53.412500 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:41:53.412508 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:41:53.412515 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 13 23:41:53.412522 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 23:41:53.412528 kernel: GICv3: 960 SPIs implemented
May 13 23:41:53.412535 kernel: GICv3: 0 Extended SPIs implemented
May 13 23:41:53.412541 kernel: Root IRQ handler: gic_handle_irq
May 13 23:41:53.412548 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 23:41:53.412554 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
May 13 23:41:53.412561 kernel: ITS: No ITS available, not enabling LPIs
May 13 23:41:53.412568 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:41:53.412574 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:41:53.412581 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 23:41:53.412589 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 23:41:53.412596 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 23:41:53.412602 kernel: Console: colour dummy device 80x25
May 13 23:41:53.412609 kernel: printk: console [tty1] enabled
May 13 23:41:53.412616 kernel: ACPI: Core revision 20230628
May 13 23:41:53.412623 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 23:41:53.412630 kernel: pid_max: default: 32768 minimum: 301
May 13 23:41:53.412637 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:41:53.412644 kernel: landlock: Up and running.
May 13 23:41:53.412652 kernel: SELinux: Initializing.
May 13 23:41:53.412659 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:41:53.412665 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:41:53.412672 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 23:41:53.412679 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 23:41:53.412686 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
May 13 23:41:53.412693 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
May 13 23:41:53.412706 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 13 23:41:53.412713 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:41:53.412720 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:41:53.412728 kernel: Remapping and enabling EFI services.
May 13 23:41:53.412737 kernel: smp: Bringing up secondary CPUs ...
May 13 23:41:53.412747 kernel: Detected PIPT I-cache on CPU1
May 13 23:41:53.412756 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
May 13 23:41:53.412764 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:41:53.412772 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 23:41:53.412780 kernel: smp: Brought up 1 node, 2 CPUs
May 13 23:41:53.412791 kernel: SMP: Total of 2 processors activated.
May 13 23:41:53.412799 kernel: CPU features: detected: 32-bit EL0 Support
May 13 23:41:53.412808 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
May 13 23:41:53.412816 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 23:41:53.412825 kernel: CPU features: detected: CRC32 instructions
May 13 23:41:53.412832 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 23:41:53.412841 kernel: CPU features: detected: LSE atomic instructions
May 13 23:41:53.412849 kernel: CPU features: detected: Privileged Access Never
May 13 23:41:53.412858 kernel: CPU: All CPU(s) started at EL1
May 13 23:41:53.412868 kernel: alternatives: applying system-wide alternatives
May 13 23:41:53.412876 kernel: devtmpfs: initialized
May 13 23:41:53.412884 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 23:41:53.412893 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 13 23:41:53.412900 kernel: pinctrl core: initialized pinctrl subsystem
May 13 23:41:53.412907 kernel: SMBIOS 3.1.0 present.
May 13 23:41:53.412914 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
May 13 23:41:53.412921 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:41:53.412929 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 23:41:53.412938 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 23:41:53.412945 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 23:41:53.412953 kernel: audit: initializing netlink subsys (disabled)
May 13 23:41:53.412961 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
May 13 23:41:53.412969 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:41:53.412978 kernel: cpuidle: using governor menu
May 13 23:41:53.412986 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 23:41:53.412994 kernel: ASID allocator initialised with 32768 entries
May 13 23:41:53.413002 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:41:53.413013 kernel: Serial: AMBA PL011 UART driver
May 13 23:41:53.413021 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 23:41:53.413029 kernel: Modules: 0 pages in range for non-PLT usage
May 13 23:41:53.413038 kernel: Modules: 509232 pages in range for PLT usage
May 13 23:41:53.413046 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 23:41:53.413054 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 23:41:53.413061 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 23:41:53.413068 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 23:41:53.413075 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:41:53.413084 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:41:53.413091 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 23:41:53.413098 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 23:41:53.413105 kernel: ACPI: Added _OSI(Module Device)
May 13 23:41:53.413112 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:41:53.413119 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:41:53.413126 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:41:53.413132 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 23:41:53.413140 kernel: ACPI: Interpreter enabled
May 13 23:41:53.413148 kernel: ACPI: Using GIC for interrupt routing
May 13 23:41:53.413156 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
May 13 23:41:53.413163 kernel: printk: console [ttyAMA0] enabled
May 13 23:41:53.413170 kernel: printk: bootconsole [pl11] disabled
May 13 23:41:53.413177 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
May 13 23:41:53.413184 kernel: iommu: Default domain type: Translated
May 13 23:41:53.413191 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 23:41:53.413198 kernel: efivars: Registered efivars operations
May 13 23:41:53.413205 kernel: vgaarb: loaded
May 13 23:41:53.413214 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 23:41:53.413221 kernel: VFS: Disk quotas dquot_6.6.0
May 13 23:41:53.413228 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 23:41:53.413235 kernel: pnp: PnP ACPI init
May 13 23:41:53.413242 kernel: pnp: PnP ACPI: found 0 devices
May 13 23:41:53.413255 kernel: NET: Registered PF_INET protocol family
May 13 23:41:53.413272 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 23:41:53.413281 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 23:41:53.413288 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 23:41:53.413298 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 23:41:53.413305 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 23:41:53.413312 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 23:41:53.413319 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:41:53.413326 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:41:53.413334 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 23:41:53.413340 kernel: PCI: CLS 0 bytes, default 64
May 13 23:41:53.413347 kernel: kvm [1]: HYP mode not available
May 13 23:41:53.413354 kernel: Initialise system trusted keyrings
May 13 23:41:53.413363 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 23:41:53.413370 kernel: Key type asymmetric registered
May 13 23:41:53.413377 kernel: Asymmetric key parser 'x509' registered
May 13 23:41:53.413384 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 23:41:53.413391 kernel: io scheduler mq-deadline registered
May 13 23:41:53.413398 kernel: io scheduler kyber registered
May 13 23:41:53.413405 kernel: io scheduler bfq registered
May 13 23:41:53.413412 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 23:41:53.413419 kernel: thunder_xcv, ver 1.0
May 13 23:41:53.413427 kernel: thunder_bgx, ver 1.0
May 13 23:41:53.413434 kernel: nicpf, ver 1.0
May 13 23:41:53.413441 kernel: nicvf, ver 1.0
May 13 23:41:53.413600 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 23:41:53.413672 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T23:41:52 UTC (1747179712)
May 13 23:41:53.413683 kernel: efifb: probing for efifb
May 13 23:41:53.413690 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 13 23:41:53.413697 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 13 23:41:53.413706 kernel: efifb: scrolling: redraw
May 13 23:41:53.413713 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 13 23:41:53.413721 kernel: Console: switching to colour frame buffer device 128x48
May 13 23:41:53.413728 kernel: fb0: EFI VGA frame buffer device
May 13 23:41:53.413735 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
May 13 23:41:53.413742 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 23:41:53.413749 kernel: No ACPI PMU IRQ for CPU0
May 13 23:41:53.413756 kernel: No ACPI PMU IRQ for CPU1
May 13 23:41:53.413763 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
May 13 23:41:53.413771 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 13 23:41:53.413778 kernel: watchdog: Hard watchdog permanently disabled
May 13 23:41:53.413786 kernel: NET: Registered PF_INET6 protocol family
May 13 23:41:53.413793 kernel: Segment Routing with IPv6
May 13 23:41:53.413800 kernel: In-situ OAM (IOAM) with IPv6
May 13 23:41:53.413807 kernel: NET: Registered PF_PACKET protocol family
May 13 23:41:53.413814 kernel: Key type dns_resolver registered
May 13 23:41:53.413821 kernel: registered taskstats version 1
May 13 23:41:53.413828 kernel: Loading compiled-in X.509 certificates
May 13 23:41:53.413836 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd'
May 13 23:41:53.413843 kernel: Key type .fscrypt registered
May 13 23:41:53.413850 kernel: Key type fscrypt-provisioning registered
May 13 23:41:53.413857 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 23:41:53.413864 kernel: ima: Allocated hash algorithm: sha1
May 13 23:41:53.413871 kernel: ima: No architecture policies found
May 13 23:41:53.413878 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 23:41:53.413885 kernel: clk: Disabling unused clocks
May 13 23:41:53.413892 kernel: Freeing unused kernel memory: 38464K
May 13 23:41:53.413901 kernel: Run /init as init process
May 13 23:41:53.413908 kernel: with arguments:
May 13 23:41:53.413915 kernel: /init
May 13 23:41:53.413922 kernel: with environment:
May 13 23:41:53.413929 kernel: HOME=/
May 13 23:41:53.413936 kernel: TERM=linux
May 13 23:41:53.413943 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 23:41:53.413951 systemd[1]: Successfully made /usr/ read-only.
May 13 23:41:53.413962 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:41:53.413970 systemd[1]: Detected virtualization microsoft.
May 13 23:41:53.413978 systemd[1]: Detected architecture arm64.
May 13 23:41:53.413985 systemd[1]: Running in initrd.
May 13 23:41:53.413993 systemd[1]: No hostname configured, using default hostname.
May 13 23:41:53.414001 systemd[1]: Hostname set to <localhost>.
May 13 23:41:53.414008 systemd[1]: Initializing machine ID from random generator.
May 13 23:41:53.414016 systemd[1]: Queued start job for default target initrd.target.
May 13 23:41:53.414026 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:41:53.414033 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:41:53.414042 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 23:41:53.414050 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:41:53.414058 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 23:41:53.414066 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 23:41:53.414075 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 23:41:53.414085 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 23:41:53.414093 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:41:53.414100 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:41:53.414108 systemd[1]: Reached target paths.target - Path Units.
May 13 23:41:53.414116 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:41:53.414124 systemd[1]: Reached target swap.target - Swaps.
May 13 23:41:53.414132 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:41:53.414140 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:41:53.414149 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:41:53.414157 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 23:41:53.414165 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 23:41:53.414173 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:41:53.414181 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:41:53.414189 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:41:53.414196 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:41:53.414204 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 23:41:53.414212 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:41:53.414222 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 23:41:53.414229 systemd[1]: Starting systemd-fsck-usr.service...
May 13 23:41:53.414237 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:41:53.414245 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:41:53.416312 systemd-journald[218]: Collecting audit messages is disabled.
May 13 23:41:53.416354 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:41:53.416364 systemd-journald[218]: Journal started
May 13 23:41:53.416382 systemd-journald[218]: Runtime Journal (/run/log/journal/9613c5ac44c648b7a485d7d846316e54) is 8M, max 78.5M, 70.5M free.
May 13 23:41:53.421418 systemd-modules-load[220]: Inserted module 'overlay'
May 13 23:41:53.445872 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:41:53.464072 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 23:41:53.482853 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 23:41:53.482879 kernel: Bridge firewalling registered
May 13 23:41:53.476596 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:41:53.481920 systemd-modules-load[220]: Inserted module 'br_netfilter'
May 13 23:41:53.490789 systemd[1]: Finished systemd-fsck-usr.service.
May 13 23:41:53.502505 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:41:53.515174 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:41:53.531408 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:41:53.548403 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:41:53.573791 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:41:53.591679 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:41:53.601070 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:41:53.624245 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:41:53.633325 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:41:53.650539 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:41:53.669990 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 23:41:53.688395 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:41:53.709906 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:41:53.733328 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:41:53.751241 dracut-cmdline[250]: dracut-dracut-053
May 13 23:41:53.756621 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 13 23:41:53.795712 systemd-resolved[251]: Positive Trust Anchors:
May 13 23:41:53.795734 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:41:53.795765 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:41:53.797997 systemd-resolved[251]: Defaulting to hostname 'linux'.
May 13 23:41:53.800676 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:41:53.818859 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:41:53.939285 kernel: SCSI subsystem initialized
May 13 23:41:53.947293 kernel: Loading iSCSI transport class v2.0-870.
May 13 23:41:53.958306 kernel: iscsi: registered transport (tcp)
May 13 23:41:53.977137 kernel: iscsi: registered transport (qla4xxx)
May 13 23:41:53.977194 kernel: QLogic iSCSI HBA Driver
May 13 23:41:54.015906 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 23:41:54.027394 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 23:41:54.081693 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 23:41:54.081757 kernel: device-mapper: uevent: version 1.0.3
May 13 23:41:54.088836 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 23:41:54.140303 kernel: raid6: neonx8 gen() 15795 MB/s
May 13 23:41:54.158278 kernel: raid6: neonx4 gen() 15815 MB/s
May 13 23:41:54.178306 kernel: raid6: neonx2 gen() 13166 MB/s
May 13 23:41:54.200283 kernel: raid6: neonx1 gen() 10510 MB/s
May 13 23:41:54.220278 kernel: raid6: int64x8 gen() 6791 MB/s
May 13 23:41:54.240279 kernel: raid6: int64x4 gen() 7359 MB/s
May 13 23:41:54.261278 kernel: raid6: int64x2 gen() 6108 MB/s
May 13 23:41:54.284965 kernel: raid6: int64x1 gen() 5059 MB/s
May 13 23:41:54.284983 kernel: raid6: using algorithm neonx4 gen() 15815 MB/s
May 13 23:41:54.308610 kernel: raid6: .... xor() 12391 MB/s, rmw enabled
May 13 23:41:54.308624 kernel: raid6: using neon recovery algorithm
May 13 23:41:54.321096 kernel: xor: measuring software checksum speed
May 13 23:41:54.321115 kernel: 8regs : 21618 MB/sec
May 13 23:41:54.324897 kernel: 32regs : 21624 MB/sec
May 13 23:41:54.328855 kernel: arm64_neon : 27974 MB/sec
May 13 23:41:54.333402 kernel: xor: using function: arm64_neon (27974 MB/sec)
May 13 23:41:54.383300 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 23:41:54.393751 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:41:54.408137 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:41:54.443606 systemd-udevd[436]: Using default interface naming scheme 'v255'.
May 13 23:41:54.449320 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:41:54.466444 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 23:41:54.500381 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation
May 13 23:41:54.525778 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:41:54.533401 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:41:54.591304 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:41:54.615940 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 23:41:54.646415 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 23:41:54.653997 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:41:54.670162 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:41:54.688468 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:41:54.715628 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 23:41:54.739415 kernel: hv_vmbus: Vmbus version:5.3
May 13 23:41:54.740009 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:41:54.759059 kernel: pps_core: LinuxPPS API ver. 1 registered
May 13 23:41:54.759088 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 13 23:41:54.746274 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:41:54.782377 kernel: PTP clock support registered
May 13 23:41:54.789326 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:41:54.823993 kernel: hv_utils: Registering HyperV Utility Driver
May 13 23:41:54.824018 kernel: hv_vmbus: registering driver hv_utils
May 13 23:41:54.824027 kernel: hv_utils: Heartbeat IC version 3.0
May 13 23:41:54.824057 kernel: hv_vmbus: registering driver hyperv_keyboard
May 13 23:41:54.824067 kernel: hv_utils: Shutdown IC version 3.2
May 13 23:41:54.824076 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
May 13 23:41:54.849173 kernel: hv_vmbus: registering driver hv_netvsc
May 13 23:41:54.849225 kernel: hv_vmbus: registering driver hid_hyperv
May 13 23:41:54.849236 kernel: hv_utils: TimeSync IC version 4.0
May 13 23:41:54.848807 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:41:54.754938 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
May 13 23:41:54.775699 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 13 23:41:54.775822 systemd-journald[218]: Time jumped backwards, rotating.
May 13 23:41:54.775858 kernel: hv_vmbus: registering driver hv_storvsc
May 13 23:41:54.848971 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:41:54.731283 systemd-resolved[251]: Clock change detected. Flushing caches.
May 13 23:41:54.834798 kernel: scsi host1: storvsc_host_t
May 13 23:41:54.834952 kernel: scsi host0: storvsc_host_t
May 13 23:41:54.835042 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 13 23:41:54.835063 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
May 13 23:41:54.835080 kernel: hv_netvsc 000d3af7-8140-000d-3af7-8140000d3af7 eth0: VF slot 1 added
May 13 23:41:54.739626 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:41:54.783070 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:41:54.882711 kernel: hv_vmbus: registering driver hv_pci
May 13 23:41:54.791732 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:41:54.792185 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:41:54.914250 kernel: hv_pci 6f819876-2c99-4bef-a568-9ac6dd449504: PCI VMBus probing: Using version 0x10004
May 13 23:41:54.914533 kernel: hv_pci 6f819876-2c99-4bef-a568-9ac6dd449504: PCI host bridge to bus 2c99:00
May 13 23:41:54.929126 kernel: pci_bus 2c99:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
May 13 23:41:54.929268 kernel: pci_bus 2c99:00: No busn resource found for root bus, will use [bus 00-ff]
May 13 23:41:54.835076 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:41:54.978878 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
May 13 23:41:54.979066 kernel: pci 2c99:00:02.0: [15b3:1018] type 00 class 0x020000
May 13 23:41:54.979090 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 23:41:54.979106 kernel: pci 2c99:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 13 23:41:54.979120 kernel: pci 2c99:00:02.0: enabling Extended Tags
May 13 23:41:54.847471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:41:54.847531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:41:54.869738 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:41:54.914813 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:41:54.957408 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:41:55.037740 kernel: pci 2c99:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2c99:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
May 13 23:41:55.037804 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 13 23:41:55.054619 kernel: pci_bus 2c99:00: busn_res: [bus 00-ff] end is updated to 00
May 13 23:41:55.054795 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
May 13 23:41:55.054905 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
May 13 23:41:55.065766 kernel: pci 2c99:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 13 23:41:55.085457 kernel: sd 1:0:0:0: [sda] Write Protect is off
May 13 23:41:55.085694 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 13 23:41:55.086042 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:41:55.117600 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 13 23:41:55.133790 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:41:55.168010 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 13 23:41:55.168029 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
May 13 23:41:55.199698 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:41:55.231440 kernel: mlx5_core 2c99:00:02.0: enabling device (0000 -> 0002)
May 13 23:41:55.240606 kernel: mlx5_core 2c99:00:02.0: firmware version: 16.30.1284
May 13 23:41:55.445236 kernel: hv_netvsc 000d3af7-8140-000d-3af7-8140000d3af7 eth0: VF registering: eth1
May 13 23:41:55.445438 kernel: mlx5_core 2c99:00:02.0 eth1: joined to eth0
May 13 23:41:55.453687 kernel: mlx5_core 2c99:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
May 13 23:41:55.466654 kernel: mlx5_core 2c99:00:02.0 enP11417s1: renamed from eth1
May 13 23:41:55.669239 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
May 13 23:41:55.702899 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (491)
May 13 23:41:55.702920 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (494)
May 13 23:41:55.709752 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
May 13 23:41:55.737485 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
May 13 23:41:55.746399 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
May 13 23:41:55.778418 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 13 23:41:55.793737 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:41:55.834615 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 13 23:41:55.851613 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 13 23:41:56.859638 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 13 23:41:56.860062 disk-uuid[607]: The operation has completed successfully.
May 13 23:41:56.920674 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:41:56.920771 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:41:56.959183 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:41:56.985980 sh[693]: Success
May 13 23:41:57.017646 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 23:41:57.188945 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:41:57.204719 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:41:57.222671 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:41:57.259065 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d
May 13 23:41:57.259109 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 23:41:57.267282 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:41:57.272832 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:41:57.278564 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:41:57.566578 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:41:57.571781 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:41:57.574718 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:41:57.611281 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:41:57.653420 kernel: BTRFS info (device sda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:41:57.653466 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:41:57.658220 kernel: BTRFS info (device sda6): using free space tree
May 13 23:41:57.678616 kernel: BTRFS info (device sda6): auto enabling async discard
May 13 23:41:57.690651 kernel: BTRFS info (device sda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:41:57.694851 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:41:57.702729 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:41:57.758759 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:41:57.774248 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:41:57.817758 systemd-networkd[874]: lo: Link UP
May 13 23:41:57.817765 systemd-networkd[874]: lo: Gained carrier
May 13 23:41:57.823219 systemd-networkd[874]: Enumeration completed
May 13 23:41:57.823376 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:41:57.830395 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:41:57.830399 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:41:57.830940 systemd[1]: Reached target network.target - Network.
May 13 23:41:57.897611 kernel: mlx5_core 2c99:00:02.0 enP11417s1: Link up
May 13 23:41:57.944631 kernel: hv_netvsc 000d3af7-8140-000d-3af7-8140000d3af7 eth0: Data path switched to VF: enP11417s1
May 13 23:41:57.945175 systemd-networkd[874]: enP11417s1: Link UP
May 13 23:41:57.945395 systemd-networkd[874]: eth0: Link UP
May 13 23:41:57.945804 systemd-networkd[874]: eth0: Gained carrier
May 13 23:41:57.945813 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:41:57.955083 systemd-networkd[874]: enP11417s1: Gained carrier
May 13 23:41:57.987626 systemd-networkd[874]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 13 23:41:58.428194 ignition[807]: Ignition 2.20.0
May 13 23:41:58.428205 ignition[807]: Stage: fetch-offline
May 13 23:41:58.430353 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:41:58.428237 ignition[807]: no configs at "/usr/lib/ignition/base.d"
May 13 23:41:58.440707 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 13 23:41:58.428245 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:41:58.428330 ignition[807]: parsed url from cmdline: ""
May 13 23:41:58.428333 ignition[807]: no config URL provided
May 13 23:41:58.428338 ignition[807]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:41:58.428345 ignition[807]: no config at "/usr/lib/ignition/user.ign"
May 13 23:41:58.428350 ignition[807]: failed to fetch config: resource requires networking
May 13 23:41:58.428518 ignition[807]: Ignition finished successfully
May 13 23:41:58.500254 ignition[883]: Ignition 2.20.0
May 13 23:41:58.500266 ignition[883]: Stage: fetch
May 13 23:41:58.500422 ignition[883]: no configs at "/usr/lib/ignition/base.d"
May 13 23:41:58.500431 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:41:58.500519 ignition[883]: parsed url from cmdline: ""
May 13 23:41:58.500522 ignition[883]: no config URL provided
May 13 23:41:58.500526 ignition[883]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:41:58.500533 ignition[883]: no config at "/usr/lib/ignition/user.ign"
May 13 23:41:58.500558 ignition[883]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 13 23:41:58.593117 ignition[883]: GET result: OK
May 13 23:41:58.593198 ignition[883]: config has been read from IMDS userdata
May 13 23:41:58.593238 ignition[883]: parsing config with SHA512: ddf83d74e5724ac0d6e7c4c0e51ce0dff6cec89679b1e89903329bc16cac97dae37f459c2fbea6e99d3948232564deaf424adf6cba6bf2e140facca3699cabd4
May 13 23:41:58.597728 unknown[883]: fetched base config from "system"
May 13 23:41:58.598102 ignition[883]: fetch: fetch complete
May 13 23:41:58.597736 unknown[883]: fetched base config from "system"
May 13 23:41:58.598107 ignition[883]: fetch: fetch passed
May 13 23:41:58.597741 unknown[883]: fetched user config from "azure"
May 13 23:41:58.598152 ignition[883]: Ignition finished successfully
May 13 23:41:58.605118 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 13 23:41:58.620733 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 23:41:58.672405 ignition[889]: Ignition 2.20.0
May 13 23:41:58.675986 ignition[889]: Stage: kargs
May 13 23:41:58.678831 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:41:58.676213 ignition[889]: no configs at "/usr/lib/ignition/base.d"
May 13 23:41:58.690739 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:41:58.676223 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:41:58.677212 ignition[889]: kargs: kargs passed
May 13 23:41:58.677263 ignition[889]: Ignition finished successfully
May 13 23:41:58.731631 ignition[895]: Ignition 2.20.0
May 13 23:41:58.731641 ignition[895]: Stage: disks
May 13 23:41:58.734981 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 23:41:58.731835 ignition[895]: no configs at "/usr/lib/ignition/base.d"
May 13 23:41:58.748540 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 23:41:58.731847 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:41:58.763349 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:41:58.732794 ignition[895]: disks: disks passed
May 13 23:41:58.778361 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:41:58.732845 ignition[895]: Ignition finished successfully
May 13 23:41:58.793092 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:41:58.810827 systemd[1]: Reached target basic.target - Basic System.
May 13 23:41:58.827728 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 23:41:58.909278 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
May 13 23:41:58.913977 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 23:41:58.931693 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 23:41:59.001615 kernel: EXT4-fs (sda9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none.
May 13 23:41:59.001793 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 23:41:59.008172 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 23:41:59.043036 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:41:59.065549 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 23:41:59.090611 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (914)
May 13 23:41:59.108225 kernel: BTRFS info (device sda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:41:59.108279 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:41:59.114629 kernel: BTRFS info (device sda6): using free space tree
May 13 23:41:59.113759 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 13 23:41:59.124880 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 23:41:59.124924 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:41:59.161147 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 23:41:59.191899 kernel: BTRFS info (device sda6): auto enabling async discard
May 13 23:41:59.186005 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:41:59.203753 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 23:41:59.253738 systemd-networkd[874]: eth0: Gained IPv6LL
May 13 23:41:59.381911 systemd-networkd[874]: enP11417s1: Gained IPv6LL
May 13 23:41:59.573220 coreos-metadata[916]: May 13 23:41:59.573 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 13 23:41:59.585253 coreos-metadata[916]: May 13 23:41:59.585 INFO Fetch successful
May 13 23:41:59.592470 coreos-metadata[916]: May 13 23:41:59.589 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 13 23:41:59.607337 coreos-metadata[916]: May 13 23:41:59.606 INFO Fetch successful
May 13 23:41:59.618978 coreos-metadata[916]: May 13 23:41:59.618 INFO wrote hostname ci-4284.0.0-n-5e434aba7d to /sysroot/etc/hostname
May 13 23:41:59.630667 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 13 23:41:59.783913 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory
May 13 23:41:59.817961 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory
May 13 23:41:59.829022 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory
May 13 23:41:59.838985 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 23:42:00.440870 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 23:42:00.453700 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 23:42:00.481342 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 23:42:00.508472 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 23:42:00.515341 kernel: BTRFS info (device sda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:42:00.530009 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 23:42:00.547133 ignition[1034]: INFO : Ignition 2.20.0
May 13 23:42:00.547133 ignition[1034]: INFO : Stage: mount
May 13 23:42:00.557629 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:42:00.557629 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:42:00.557629 ignition[1034]: INFO : mount: mount passed
May 13 23:42:00.557629 ignition[1034]: INFO : Ignition finished successfully
May 13 23:42:00.553401 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 23:42:00.566703 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 23:42:00.608824 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:42:00.645799 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1044)
May 13 23:42:00.668114 kernel: BTRFS info (device sda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:42:00.668173 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:42:00.673444 kernel: BTRFS info (device sda6): using free space tree
May 13 23:42:00.681624 kernel: BTRFS info (device sda6): auto enabling async discard
May 13 23:42:00.682576 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:42:00.716365 ignition[1061]: INFO : Ignition 2.20.0
May 13 23:42:00.716365 ignition[1061]: INFO : Stage: files
May 13 23:42:00.726643 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:42:00.726643 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:42:00.726643 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping
May 13 23:42:00.751261 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 23:42:00.751261 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 23:42:00.833628 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 23:42:00.843170 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 23:42:00.843170 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 23:42:00.834109 unknown[1061]: wrote ssh authorized keys file for user: core
May 13 23:42:00.869652 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 13 23:42:00.882239 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 13 23:42:00.922427 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:42:01.274247 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 13 23:42:01.287626 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:42:01.287626 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:42:01.287626 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:42:01.287626 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:42:01.287626 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:42:01.287626 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:42:01.287626 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:42:01.287626 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:42:01.287626 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:42:01.287626 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:42:01.287626 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 23:42:01.287626 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 23:42:01.287626 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 23:42:01.287626 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 13 23:42:01.662864 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 23:42:01.901279 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 23:42:01.901279 ignition[1061]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 23:42:01.922847 ignition[1061]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:42:01.922847 ignition[1061]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:42:01.922847 ignition[1061]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 23:42:01.922847 ignition[1061]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:42:01.922847 ignition[1061]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:42:01.922847 ignition[1061]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:42:01.922847 ignition[1061]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:42:01.922847 ignition[1061]: INFO : files: files passed
May 13 23:42:01.922847 ignition[1061]: INFO : Ignition finished successfully
May 13 23:42:01.916952 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:42:01.929757 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:42:01.980949 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:42:02.021665 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:42:02.058312 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:42:02.058312 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:42:02.021767 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:42:02.103902 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:42:02.054411 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:42:02.066030 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 23:42:02.083736 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:42:02.153680 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:42:02.153825 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:42:02.169099 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:42:02.183713 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:42:02.197207 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 23:42:02.199740 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 23:42:02.247300 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:42:02.259747 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 23:42:02.292095 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 23:42:02.300693 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:42:02.318203 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:42:02.332505 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:42:02.332666 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:42:02.353274 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:42:02.359789 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:42:02.371423 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:42:02.384915 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:42:02.398054 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:42:02.412444 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 23:42:02.425748 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:42:02.439581 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 23:42:02.451697 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 23:42:02.464349 systemd[1]: Stopped target swap.target - Swaps.
May 13 23:42:02.476707 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 23:42:02.476832 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:42:02.493793 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 23:42:02.500659 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:42:02.512791 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 23:42:02.518039 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:42:02.525222 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 23:42:02.525340 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 23:42:02.544653 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 23:42:02.544801 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:42:02.552990 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 23:42:02.553091 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 23:42:02.566675 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 13 23:42:02.662499 ignition[1114]: INFO : Ignition 2.20.0
May 13 23:42:02.662499 ignition[1114]: INFO : Stage: umount
May 13 23:42:02.662499 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:42:02.662499 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:42:02.662499 ignition[1114]: INFO : umount: umount passed
May 13 23:42:02.662499 ignition[1114]: INFO : Ignition finished successfully
May 13 23:42:02.566776 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 13 23:42:02.592785 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 23:42:02.630173 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 23:42:02.647829 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 23:42:02.654712 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:42:02.663205 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 23:42:02.663355 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:42:02.682780 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 23:42:02.683529 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 23:42:02.683632 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 23:42:02.699386 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 23:42:02.699490 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 23:42:02.715207 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 23:42:02.715270 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 23:42:02.728628 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 13 23:42:02.728689 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 13 23:42:02.741743 systemd[1]: Stopped target network.target - Network.
May 13 23:42:02.756524 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 23:42:02.756599 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:42:02.772069 systemd[1]: Stopped target paths.target - Path Units.
May 13 23:42:02.785196 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 23:42:02.791712 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:42:02.801650 systemd[1]: Stopped target slices.target - Slice Units.
May 13 23:42:02.816246 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 23:42:02.829965 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 23:42:02.830010 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:42:02.844389 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 23:42:02.844430 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:42:02.858618 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:42:02.858675 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:42:02.872970 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:42:02.873013 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:42:02.887140 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:42:02.900398 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:42:02.923730 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 23:42:02.923815 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 23:42:02.940224 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:42:03.252793 kernel: hv_netvsc 000d3af7-8140-000d-3af7-8140000d3af7 eth0: Data path switched from VF: enP11417s1
May 13 23:42:02.940330 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:42:02.962053 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:42:02.962287 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:42:02.962490 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:42:02.984489 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:42:02.984771 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 23:42:02.984857 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 23:42:03.001479 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:42:03.001545 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:42:03.012709 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:42:03.012780 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:42:03.030719 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:42:03.054563 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:42:03.054818 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:42:03.069816 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:42:03.069871 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:42:03.090031 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:42:03.090081 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:42:03.098186 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:42:03.098234 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:42:03.112195 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:42:03.127539 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:42:03.127635 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:42:03.147867 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:42:03.148023 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:42:03.166027 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:42:03.166118 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:42:03.181581 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:42:03.181637 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:42:03.197965 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:42:03.198029 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:42:03.570088 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
May 13 23:42:03.218891 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:42:03.218942 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:42:03.246083 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:42:03.246149 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:42:03.278784 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:42:03.299746 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:42:03.299837 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:42:03.317166 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 13 23:42:03.317219 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:42:03.326660 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:42:03.326709 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:42:03.344723 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:42:03.344786 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:42:03.365121 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 23:42:03.365186 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:42:03.365522 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:42:03.365650 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:42:03.394949 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:42:03.395180 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:42:03.408949 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:42:03.423786 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:42:03.469837 systemd[1]: Switching root.
May 13 23:42:03.723488 systemd-journald[218]: Journal stopped
May 13 23:42:08.215804 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:42:08.215835 kernel: SELinux: policy capability open_perms=1
May 13 23:42:08.215848 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:42:08.215856 kernel: SELinux: policy capability always_check_network=0
May 13 23:42:08.215868 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:42:08.215876 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:42:08.215885 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:42:08.215893 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:42:08.215901 kernel: audit: type=1403 audit(1747179724.806:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:42:08.215911 systemd[1]: Successfully loaded SELinux policy in 128.182ms.
May 13 23:42:08.215923 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.580ms.
May 13 23:42:08.215933 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:42:08.215942 systemd[1]: Detected virtualization microsoft.
May 13 23:42:08.215951 systemd[1]: Detected architecture arm64.
May 13 23:42:08.215960 systemd[1]: Detected first boot.
May 13 23:42:08.215972 systemd[1]: Hostname set to <ci-4284.0.0-n-5e434aba7d>.
May 13 23:42:08.215981 systemd[1]: Initializing machine ID from random generator.
May 13 23:42:08.215990 zram_generator::config[1158]: No configuration found.
May 13 23:42:08.216000 kernel: NET: Registered PF_VSOCK protocol family
May 13 23:42:08.216008 systemd[1]: Populated /etc with preset unit settings.
May 13 23:42:08.216018 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:42:08.216028 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:42:08.216038 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:42:08.216048 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:42:08.216057 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:42:08.216067 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:42:08.216076 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:42:08.216086 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:42:08.216095 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:42:08.216106 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:42:08.216116 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:42:08.216125 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:42:08.216134 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:42:08.216143 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:42:08.216153 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:42:08.216162 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:42:08.216171 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:42:08.216182 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:42:08.216192 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 23:42:08.216201 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:42:08.216213 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:42:08.216222 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:42:08.216232 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:42:08.216242 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
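The "systemd 256.8 running in system mode (+PAM +AUDIT ...)" entry above encodes compile-time options as +NAME/-NAME flags. A small illustrative sketch (the function and variable names are mine, not any systemd API) that splits such a flag string into enabled and disabled sets:

def parse_features(flags: str):
    """Split a systemd feature string such as '+PAM +AUDIT -APPARMOR'
    into (enabled, disabled) sets of feature names."""
    enabled, disabled = set(), set()
    for token in flags.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
    return enabled, disabled

# A subset of the flag string logged above:
flags = "+PAM +AUDIT +SELINUX -APPARMOR +SECCOMP -GNUTLS +OPENSSL +TPM2 -SYSVINIT"
enabled, disabled = parse_features(flags)
assert "SELINUX" in enabled and "APPARMOR" in disabled
print(sorted(enabled))
print(sorted(disabled))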
May 13 23:42:08.216252 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:42:08.216263 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:42:08.216272 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:42:08.216281 systemd[1]: Reached target swap.target - Swaps.
May 13 23:42:08.216291 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:42:08.216301 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:42:08.216310 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:42:08.216322 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:42:08.216331 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:42:08.216341 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:42:08.216350 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:42:08.216360 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:42:08.216369 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:42:08.216379 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:42:08.216390 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:42:08.216403 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:42:08.216416 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:42:08.216426 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:42:08.216436 systemd[1]: Reached target machines.target - Containers.
May 13 23:42:08.216445 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:42:08.216455 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:42:08.216466 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:42:08.216477 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:42:08.216487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:42:08.216497 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:42:08.216506 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:42:08.216516 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:42:08.216527 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:42:08.216536 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:42:08.216546 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:42:08.216557 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:42:08.216567 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:42:08.216576 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:42:08.216586 kernel: loop: module loaded
May 13 23:42:08.216606 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:42:08.216615 kernel: fuse: init (API version 7.39)
May 13 23:42:08.216624 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:42:08.216634 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:42:08.216643 kernel: ACPI: bus type drm_connector registered
May 13 23:42:08.216654 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:42:08.216687 systemd-journald[1262]: Collecting audit messages is disabled.
May 13 23:42:08.216707 systemd-journald[1262]: Journal started
May 13 23:42:08.216731 systemd-journald[1262]: Runtime Journal (/run/log/journal/e1acc22edb4548cd823eedeab14df26a) is 8M, max 78.5M, 70.5M free.
May 13 23:42:07.152543 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:42:07.157611 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 13 23:42:07.157975 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:42:07.158273 systemd[1]: systemd-journald.service: Consumed 3.932s CPU time.
May 13 23:42:08.245997 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:42:08.269654 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:42:08.290542 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:42:08.303686 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:42:08.303745 systemd[1]: Stopped verity-setup.service.
May 13 23:42:08.323033 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:42:08.323945 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:42:08.330501 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:42:08.337947 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:42:08.344056 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:42:08.350778 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:42:08.357928 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:42:08.364080 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:42:08.372208 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:42:08.380415 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:42:08.380580 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:42:08.388743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:42:08.388905 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:42:08.396700 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:42:08.396856 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:42:08.403748 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:42:08.403909 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:42:08.412544 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:42:08.412718 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:42:08.420205 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:42:08.420369 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:42:08.428324 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:42:08.436667 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:42:08.445022 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:42:08.453210 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:42:08.460524 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:42:08.479278 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:42:08.487407 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:42:08.506025 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 23:42:08.513366 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 23:42:08.513570 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:42:08.520808 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 23:42:08.530527 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 23:42:08.548624 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 23:42:08.555557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:42:08.557050 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 23:42:08.565793 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 23:42:08.574250 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:42:08.575956 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 23:42:08.583352 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:42:08.587473 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:42:08.599768 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:42:08.616879 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:42:08.628766 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:42:08.640990 systemd-journald[1262]: Time spent on flushing to /var/log/journal/e1acc22edb4548cd823eedeab14df26a is 78.394ms for 917 entries.
May 13 23:42:08.640990 systemd-journald[1262]: System Journal (/var/log/journal/e1acc22edb4548cd823eedeab14df26a) is 11.8M, max 2.6G, 2.6G free.
May 13 23:42:08.857848 systemd-journald[1262]: Received client request to flush runtime journal.
May 13 23:42:08.857913 kernel: loop0: detected capacity change from 0 to 201592
May 13 23:42:08.857934 systemd-journald[1262]: /var/log/journal/e1acc22edb4548cd823eedeab14df26a/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
May 13 23:42:08.857964 systemd-journald[1262]: Rotating system journal.
May 13 23:42:08.857985 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:42:08.858001 kernel: loop1: detected capacity change from 0 to 103832
May 13 23:42:08.658199 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:42:08.666056 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:42:08.674966 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:42:08.683428 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:42:08.712378 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:42:08.725665 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:42:08.736268 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:42:08.745257 udevadm[1301]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 23:42:08.781319 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
May 13 23:42:08.781330 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
May 13 23:42:08.786899 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:42:08.803762 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:42:08.861630 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:42:08.872458 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:42:08.875732 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:42:09.079275 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:42:09.087711 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:42:09.113887 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
May 13 23:42:09.114177 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
May 13 23:42:09.117997 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:42:09.210142 kernel: loop2: detected capacity change from 0 to 28888
May 13 23:42:09.507618 kernel: loop3: detected capacity change from 0 to 126448
May 13 23:42:09.640082 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:42:09.650772 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:42:09.685886 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
May 13 23:42:09.777289 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:42:09.798687 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:42:09.854232 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:42:09.869882 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 13 23:42:09.925158 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 23:42:09.951759 kernel: mousedev: PS/2 mouse device common for all mice
May 13 23:42:10.021955 kernel: hv_vmbus: registering driver hv_balloon
May 13 23:42:10.022051 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 13 23:42:10.022070 kernel: hv_vmbus: registering driver hyperv_fb
May 13 23:42:10.022087 kernel: hv_balloon: Memory hot add disabled on ARM64
May 13 23:42:10.027094 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 13 23:42:10.035493 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 13 23:42:10.048447 kernel: Console: switching to colour dummy device 80x25
May 13 23:42:10.048526 kernel: loop4: detected capacity change from 0 to 201592
May 13 23:42:10.055455 kernel: Console: switching to colour frame buffer device 128x48
May 13 23:42:10.055985 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:42:10.082142 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:42:10.082350 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:42:10.101291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:42:10.110151 systemd-networkd[1346]: lo: Link UP
May 13 23:42:10.113836 kernel: loop5: detected capacity change from 0 to 103832
May 13 23:42:10.110158 systemd-networkd[1346]: lo: Gained carrier
May 13 23:42:10.112337 systemd-networkd[1346]: Enumeration completed
May 13 23:42:10.114080 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:42:10.114320 systemd-networkd[1346]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:42:10.114324 systemd-networkd[1346]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:42:10.134052 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 23:42:10.157182 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 23:42:10.159638 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1334)
May 13 23:42:10.159707 kernel: loop6: detected capacity change from 0 to 28888
May 13 23:42:10.174048 kernel: mlx5_core 2c99:00:02.0 enP11417s1: Link up
May 13 23:42:10.177144 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:42:10.178677 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:42:10.190446 kernel: loop7: detected capacity change from 0 to 126448
May 13 23:42:10.191905 (sd-merge)[1379]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
May 13 23:42:10.192379 (sd-merge)[1379]: Merged extensions into '/usr'.
May 13 23:42:10.202478 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:42:10.238623 kernel: hv_netvsc 000d3af7-8140-000d-3af7-8140000d3af7 eth0: Data path switched to VF: enP11417s1
May 13 23:42:10.245928 systemd-networkd[1346]: enP11417s1: Link UP
May 13 23:42:10.248473 systemd-networkd[1346]: eth0: Link UP
May 13 23:42:10.249458 systemd-networkd[1346]: eth0: Gained carrier
May 13 23:42:10.249531 systemd-networkd[1346]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:42:10.254627 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 23:42:10.264842 systemd[1]: Reload requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:42:10.264946 systemd[1]: Reloading...
May 13 23:42:10.266110 systemd-networkd[1346]: enP11417s1: Gained carrier
May 13 23:42:10.276171 systemd-networkd[1346]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 13 23:42:10.351810 zram_generator::config[1474]: No configuration found.
May 13 23:42:10.476285 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:42:10.575159 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 13 23:42:10.582728 systemd[1]: Reloading finished in 317 ms.
May 13 23:42:10.600789 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 23:42:10.609982 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 23:42:10.647956 systemd[1]: Starting ensure-sysext.service...
May 13 23:42:10.657759 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 23:42:10.677299 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 23:42:10.688013 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:42:10.697970 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:42:10.716320 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 23:42:10.716529 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 23:42:10.717150 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 23:42:10.717345 systemd-tmpfiles[1535]: ACLs are not supported, ignoring.
May 13 23:42:10.717387 systemd-tmpfiles[1535]: ACLs are not supported, ignoring.
May 13 23:42:10.728025 lvm[1533]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:42:10.732374 systemd[1]: Reload requested from client PID 1532 ('systemctl') (unit ensure-sysext.service)...
May 13 23:42:10.732384 systemd[1]: Reloading...
May 13 23:42:10.736227 systemd-tmpfiles[1535]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:42:10.736236 systemd-tmpfiles[1535]: Skipping /boot
May 13 23:42:10.749911 systemd-tmpfiles[1535]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:42:10.749927 systemd-tmpfiles[1535]: Skipping /boot
May 13 23:42:10.813900 zram_generator::config[1570]: No configuration found.
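Nearly all of the systemd entries in this section follow a fixed "Starting <unit> - <description>..." / "Finished <unit> - <description>." shape, which makes a per-unit timeline easy to pull out of the journal text. A rough sketch, again assuming the log has been saved to the hypothetical boot.log:

import re

# e.g. "May 13 23:42:10.697970 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup..."
STATE = re.compile(
    r"^May 13 (?P<ts>\S+) systemd\[1\]: "
    r"(?P<verb>Starting|Started|Stopping|Stopped|Finished|Reached|Mounted|Closed|Listening on)(?: target)? "
    r"(?P<unit>\S+?)(?: - .*)?(?:\.\.\.|\.)$"
)

def unit_timeline(lines):
    """Yield (timestamp, verb, unit) for each unit state transition logged."""
    for line in lines:
        m = STATE.match(line)
        if m:
            yield m["ts"], m["verb"], m["unit"]

with open("boot.log") as f:  # hypothetical file holding this journal text
    for ts, verb, unit in unit_timeline(f):
        print(f"{ts} {verb:<12} {unit}")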
May 13 23:42:10.921445 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:42:11.025665 systemd[1]: Reloading finished in 292 ms.
May 13 23:42:11.037155 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 23:42:11.057784 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 23:42:11.066863 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:42:11.075272 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:42:11.088730 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:42:11.096779 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:42:11.109847 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 23:42:11.119049 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 23:42:11.127878 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 23:42:11.130395 lvm[1638]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:42:11.148633 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:42:11.157839 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 23:42:11.170835 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 23:42:11.185196 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:42:11.194724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:42:11.214046 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:42:11.226904 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:42:11.234414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:42:11.234542 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:42:11.239922 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:42:11.240097 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:42:11.248147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:42:11.248306 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:42:11.256018 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:42:11.256170 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:42:11.272905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:42:11.275227 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:42:11.288873 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:42:11.298953 augenrules[1670]: No rules
May 13 23:42:11.300892 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:42:11.308117 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:42:11.308247 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:42:11.309471 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:42:11.309713 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:42:11.316824 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 23:42:11.327037 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 23:42:11.337050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:42:11.337326 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:42:11.344289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:42:11.344447 systemd-resolved[1645]: Positive Trust Anchors:
May 13 23:42:11.344467 systemd-resolved[1645]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:42:11.344498 systemd-resolved[1645]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:42:11.344965 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:42:11.355829 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:42:11.356013 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:42:11.365653 systemd-resolved[1645]: Using system hostname 'ci-4284.0.0-n-5e434aba7d'.
May 13 23:42:11.367228 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:42:11.383640 systemd[1]: Finished ensure-sysext.service.
May 13 23:42:11.390154 systemd[1]: Reached target network.target - Network.
May 13 23:42:11.395796 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:42:11.404666 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:42:11.411054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:42:11.417724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:42:11.431664 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:42:11.439889 augenrules[1683]: /sbin/augenrules: No change
May 13 23:42:11.446176 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:42:11.451034 augenrules[1702]: No rules
May 13 23:42:11.457807 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:42:11.464624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:42:11.464671 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:42:11.464715 systemd[1]: Reached target time-set.target - System Time Set.
May 13 23:42:11.472980 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:42:11.474373 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:42:11.480998 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:42:11.481163 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:42:11.490210 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:42:11.490374 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:42:11.497797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:42:11.497954 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:42:11.506352 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:42:11.506497 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:42:11.516793 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:42:11.516871 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:42:11.909142 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 23:42:11.918005 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:42:12.181834 systemd-networkd[1346]: enP11417s1: Gained IPv6LL
May 13 23:42:12.245745 systemd-networkd[1346]: eth0: Gained IPv6LL
May 13 23:42:12.248265 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 23:42:12.256084 systemd[1]: Reached target network-online.target - Network is Online.
May 13 23:42:13.736469 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 23:42:13.756441 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 23:42:13.764525 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 23:42:13.781067 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 23:42:13.789170 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:42:13.795646 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 23:42:13.803083 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 23:42:13.811054 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 23:42:13.818102 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 23:42:13.826339 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 23:42:13.834437 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 23:42:13.834472 systemd[1]: Reached target paths.target - Path Units.
May 13 23:42:13.840044 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:42:13.859215 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 23:42:13.867447 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 23:42:13.875522 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 23:42:13.882970 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 13 23:42:13.890296 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 13 23:42:13.898764 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 23:42:13.905637 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 23:42:13.914144 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 23:42:13.921753 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:42:13.927956 systemd[1]: Reached target basic.target - Basic System.
May 13 23:42:13.933772 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 23:42:13.933799 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 23:42:13.936052 systemd[1]: Starting chronyd.service - NTP client/server...
May 13 23:42:13.952704 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 23:42:13.962738 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 13 23:42:13.974288 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 23:42:13.984751 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 23:42:13.997421 (chronyd)[1721]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
May 13 23:42:13.998685 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 23:42:14.005653 jq[1728]: false
May 13 23:42:14.006032 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 23:42:14.006067 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
May 13 23:42:14.007337 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
May 13 23:42:14.014420 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
May 13 23:42:14.015556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:42:14.024779 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 23:42:14.032410 KVP[1730]: KVP starting; pid is:1730
May 13 23:42:14.035729 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 23:42:14.044720 chronyd[1737]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
May 13 23:42:14.055183 extend-filesystems[1729]: Found loop4
May 13 23:42:14.055183 extend-filesystems[1729]: Found loop5
May 13 23:42:14.055183 extend-filesystems[1729]: Found loop6
May 13 23:42:14.055183 extend-filesystems[1729]: Found loop7
May 13 23:42:14.055183 extend-filesystems[1729]: Found sda
May 13 23:42:14.055183 extend-filesystems[1729]: Found sda1
May 13 23:42:14.055183 extend-filesystems[1729]: Found sda2
May 13 23:42:14.055183 extend-filesystems[1729]: Found sda3
May 13 23:42:14.055183 extend-filesystems[1729]: Found usr
May 13 23:42:14.055183 extend-filesystems[1729]: Found sda4
May 13 23:42:14.055183 extend-filesystems[1729]: Found sda6
May 13 23:42:14.055183 extend-filesystems[1729]: Found sda7
May 13 23:42:14.055183 extend-filesystems[1729]: Found sda9
May 13 23:42:14.055183 extend-filesystems[1729]: Checking size of /dev/sda9
May 13 23:42:14.301253 kernel: hv_utils: KVP IC version 4.0
May 13 23:42:14.301280 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1775)
May 13 23:42:14.053407 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 23:42:14.301503 extend-filesystems[1729]: Old size kept for /dev/sda9
May 13 23:42:14.301503 extend-filesystems[1729]: Found sr0
May 13 23:42:14.061423 KVP[1730]: KVP LIC Version: 3.1
May 13 23:42:14.348310 coreos-metadata[1723]: May 13 23:42:14.216 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 13 23:42:14.348310 coreos-metadata[1723]: May 13 23:42:14.225 INFO Fetch successful
May 13 23:42:14.348310 coreos-metadata[1723]: May 13 23:42:14.225 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
May 13 23:42:14.348310 coreos-metadata[1723]: May 13 23:42:14.238 INFO Fetch successful
May 13 23:42:14.348310 coreos-metadata[1723]: May 13 23:42:14.239 INFO Fetching http://168.63.129.16/machine/95df2c74-6ba7-4bee-995a-480222369e46/c017e3fb%2D9f0f%2D457d%2Da1b0%2D682756635df6.%5Fci%2D4284.0.0%2Dn%2D5e434aba7d?comp=config&type=sharedConfig&incarnation=1: Attempt #1
May 13 23:42:14.348310 coreos-metadata[1723]: May 13 23:42:14.249 INFO Fetch successful
May 13 23:42:14.348310 coreos-metadata[1723]: May 13 23:42:14.249 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
May 13 23:42:14.348310 coreos-metadata[1723]: May 13 23:42:14.262 INFO Fetch successful
May 13 23:42:14.067729 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 23:42:14.075702 chronyd[1737]: Timezone right/UTC failed leap second check, ignoring
May 13 23:42:14.090763 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 23:42:14.075958 chronyd[1737]: Loaded seccomp filter (level 2)
May 13 23:42:14.119018 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 23:42:14.132658 dbus-daemon[1724]: [system] SELinux support is enabled
May 13 23:42:14.141621 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 23:42:14.142132 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 23:42:14.355524 update_engine[1760]: I20250513 23:42:14.206988 1760 main.cc:92] Flatcar Update Engine starting
May 13 23:42:14.355524 update_engine[1760]: I20250513 23:42:14.212003 1760 update_check_scheduler.cc:74] Next update check in 4m54s
May 13 23:42:14.142727 systemd[1]: Starting update-engine.service - Update Engine...
May 13 23:42:14.364251 jq[1765]: true
May 13 23:42:14.163199 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 23:42:14.181729 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 23:42:14.207121 systemd[1]: Started chronyd.service - NTP client/server.
May 13 23:42:14.234267 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 23:42:14.236687 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 23:42:14.236987 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 23:42:14.237158 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 23:42:14.271767 systemd[1]: motdgen.service: Deactivated successfully.
May 13 23:42:14.271956 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 23:42:14.309068 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 13 23:42:14.330928 systemd-logind[1754]: New seat seat0.
May 13 23:42:14.338004 systemd-logind[1754]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 23:42:14.339302 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 23:42:14.354951 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 23:42:14.355701 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 23:42:14.399251 dbus-daemon[1724]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 13 23:42:14.390301 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 23:42:14.400343 jq[1804]: true
May 13 23:42:14.390335 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 23:42:14.391902 (ntainerd)[1805]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 23:42:14.402767 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 23:42:14.402786 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 23:42:14.415817 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 13 23:42:14.429378 systemd[1]: Started update-engine.service - Update Engine.
May 13 23:42:14.429493 tar[1793]: linux-arm64/LICENSE
May 13 23:42:14.429711 tar[1793]: linux-arm64/helm
May 13 23:42:14.444289 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 13 23:42:14.461187 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 23:42:14.590417 bash[1862]: Updated "/home/core/.ssh/authorized_keys"
May 13 23:42:14.592482 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
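The update_engine entries above carry a second, glog-style prefix of their own: a severity letter fused to the date, then time, PID, and source file:line before the message. A minimal parser for that inner format, inferred only from the two sample lines in this log:

import re

# e.g. "I20250513 23:42:14.206988 1760 main.cc:92] Flatcar Update Engine starting"
GLOG = re.compile(
    r"^(?P<sev>[IWEF])(?P<date>\d{8}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) "
    r"(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)$"
)

line = "I20250513 23:42:14.212003 1760 update_check_scheduler.cc:74] Next update check in 4m54s"
m = GLOG.match(line)
assert m is not None
print(m["sev"], m["src"], "->", m["msg"])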
May 13 23:42:14.610903 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 13 23:42:14.761100 locksmithd[1837]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 23:42:15.003642 containerd[1805]: time="2025-05-13T23:42:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 13 23:42:15.009620 containerd[1805]: time="2025-05-13T23:42:15.009302360Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 13 23:42:15.047235 containerd[1805]: time="2025-05-13T23:42:15.044707800Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.08µs"
May 13 23:42:15.047235 containerd[1805]: time="2025-05-13T23:42:15.044750640Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 13 23:42:15.047235 containerd[1805]: time="2025-05-13T23:42:15.044770840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 13 23:42:15.047235 containerd[1805]: time="2025-05-13T23:42:15.044935200Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 13 23:42:15.047235 containerd[1805]: time="2025-05-13T23:42:15.044952240Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 13 23:42:15.047235 containerd[1805]: time="2025-05-13T23:42:15.044977920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 23:42:15.047235 containerd[1805]: time="2025-05-13T23:42:15.045031320Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 23:42:15.047235 containerd[1805]: time="2025-05-13T23:42:15.045042400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 23:42:15.047235 containerd[1805]: time="2025-05-13T23:42:15.045270240Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 23:42:15.047235 containerd[1805]: time="2025-05-13T23:42:15.045284800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 23:42:15.047235 containerd[1805]: time="2025-05-13T23:42:15.045296480Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 23:42:15.047235 containerd[1805]: time="2025-05-13T23:42:15.045305080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 13 23:42:15.047617 containerd[1805]: time="2025-05-13T23:42:15.045368720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 13 23:42:15.047617 containerd[1805]: time="2025-05-13T23:42:15.045550760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 23:42:15.047617 containerd[1805]: time="2025-05-13T23:42:15.045578280Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 23:42:15.055260 containerd[1805]: time="2025-05-13T23:42:15.055106960Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 13 23:42:15.056815 containerd[1805]: time="2025-05-13T23:42:15.056772840Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 23:42:15.057077 containerd[1805]: time="2025-05-13T23:42:15.057052120Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 23:42:15.057183 containerd[1805]: time="2025-05-13T23:42:15.057163120Z" level=info msg="metadata content store policy set" policy=shared
May 13 23:42:15.084181 containerd[1805]: time="2025-05-13T23:42:15.084124600Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 23:42:15.084307 containerd[1805]: time="2025-05-13T23:42:15.084213240Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 23:42:15.084307 containerd[1805]: time="2025-05-13T23:42:15.084231600Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 23:42:15.084307 containerd[1805]: time="2025-05-13T23:42:15.084245480Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 23:42:15.084307 containerd[1805]: time="2025-05-13T23:42:15.084259840Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 23:42:15.084307 containerd[1805]: time="2025-05-13T23:42:15.084275240Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 23:42:15.084307 containerd[1805]: time="2025-05-13T23:42:15.084288280Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 23:42:15.084307 containerd[1805]: time="2025-05-13T23:42:15.084301760Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 23:42:15.084428 containerd[1805]: time="2025-05-13T23:42:15.084314360Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 23:42:15.084428 containerd[1805]: time="2025-05-13T23:42:15.084325160Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 23:42:15.084428 containerd[1805]: time="2025-05-13T23:42:15.084336760Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 23:42:15.084428 containerd[1805]: time="2025-05-13T23:42:15.084348720Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 13 23:42:15.084514 containerd[1805]: time="2025-05-13T23:42:15.084489440Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 13 23:42:15.084542 containerd[1805]: time="2025-05-13T23:42:15.084518640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 13 23:42:15.084542 containerd[1805]: time="2025-05-13T23:42:15.084533040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 13 23:42:15.084582 containerd[1805]: time="2025-05-13T23:42:15.084544400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 13 23:42:15.084582 containerd[1805]: time="2025-05-13T23:42:15.084555600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 13 23:42:15.084582 containerd[1805]: time="2025-05-13T23:42:15.084565840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 13 23:42:15.084582 containerd[1805]: time="2025-05-13T23:42:15.084577000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 13 23:42:15.084676 containerd[1805]: time="2025-05-13T23:42:15.084599760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 13 23:42:15.084676 containerd[1805]: time="2025-05-13T23:42:15.084619680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 13 23:42:15.084676 containerd[1805]: time="2025-05-13T23:42:15.084632160Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 13 23:42:15.084676 containerd[1805]: time="2025-05-13T23:42:15.084646240Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 13 23:42:15.084743 containerd[1805]: time="2025-05-13T23:42:15.084725080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 13 23:42:15.084743 containerd[1805]: time="2025-05-13T23:42:15.084740120Z" level=info msg="Start snapshots syncer"
May 13 23:42:15.084775 containerd[1805]: time="2025-05-13T23:42:15.084763880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 13 23:42:15.085278 containerd[1805]: time="2025-05-13T23:42:15.084980200Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 13 23:42:15.085278 containerd[1805]: time="2025-05-13T23:42:15.085038360Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 13 23:42:15.085494 containerd[1805]: time="2025-05-13T23:42:15.085108480Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 13 23:42:15.085494 containerd[1805]: time="2025-05-13T23:42:15.085210280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 13 23:42:15.085494 containerd[1805]: time="2025-05-13T23:42:15.085232080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 13 23:42:15.085494 containerd[1805]: time="2025-05-13T23:42:15.085242840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 13 23:42:15.085494 containerd[1805]: time="2025-05-13T23:42:15.085253480Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 13 23:42:15.085494 containerd[1805]: time="2025-05-13T23:42:15.085285160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 13 23:42:15.085494 containerd[1805]: time="2025-05-13T23:42:15.085303560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 13 23:42:15.085494 containerd[1805]: time="2025-05-13T23:42:15.085315680Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 13 23:42:15.085494 containerd[1805]: time="2025-05-13T23:42:15.085339920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 13 23:42:15.085494 containerd[1805]: time="2025-05-13T23:42:15.085352560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 13 23:42:15.085494 containerd[1805]: time="2025-05-13T23:42:15.085361680Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 13 23:42:15.085494 containerd[1805]: time="2025-05-13T23:42:15.085392760Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 23:42:15.085494 containerd[1805]: time="2025-05-13T23:42:15.085408280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 23:42:15.085494 containerd[1805]: time="2025-05-13T23:42:15.085416800Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 23:42:15.086415 containerd[1805]: time="2025-05-13T23:42:15.085425880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 23:42:15.086415 containerd[1805]: time="2025-05-13T23:42:15.085438360Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 13 23:42:15.086415 containerd[1805]: time="2025-05-13T23:42:15.085447960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 13 23:42:15.086415 containerd[1805]: time="2025-05-13T23:42:15.085458120Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 13 23:42:15.086415 containerd[1805]: time="2025-05-13T23:42:15.085477280Z" level=info msg="runtime interface created"
May 13 23:42:15.086415 containerd[1805]: time="2025-05-13T23:42:15.085482600Z" level=info msg="created NRI interface"
May 13 23:42:15.086415 containerd[1805]: time="2025-05-13T23:42:15.085493520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 13 23:42:15.086415 containerd[1805]: time="2025-05-13T23:42:15.085504840Z" level=info msg="Connect containerd service"
May 13 23:42:15.086415 containerd[1805]: time="2025-05-13T23:42:15.085530600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 23:42:15.091443 containerd[1805]: time="2025-05-13T23:42:15.090913280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 23:42:15.195791 tar[1793]: linux-arm64/README.md
May 13 23:42:15.216084 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 13 23:42:15.373837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:42:15.394975 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:42:15.776969 containerd[1805]: time="2025-05-13T23:42:15.776863200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 23:42:15.776969 containerd[1805]: time="2025-05-13T23:42:15.776935160Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 23:42:15.777114 containerd[1805]: time="2025-05-13T23:42:15.777076640Z" level=info msg="Start subscribing containerd event"
May 13 23:42:15.777139 containerd[1805]: time="2025-05-13T23:42:15.777127960Z" level=info msg="Start recovering state"
May 13 23:42:15.777596 containerd[1805]: time="2025-05-13T23:42:15.777215200Z" level=info msg="Start event monitor"
May 13 23:42:15.777596 containerd[1805]: time="2025-05-13T23:42:15.777238680Z" level=info msg="Start cni network conf syncer for default"
May 13 23:42:15.777596 containerd[1805]: time="2025-05-13T23:42:15.777247080Z" level=info msg="Start streaming server"
May 13 23:42:15.777596 containerd[1805]: time="2025-05-13T23:42:15.777259120Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 13 23:42:15.777596 containerd[1805]: time="2025-05-13T23:42:15.777266800Z" level=info msg="runtime interface starting up..."
May 13 23:42:15.777596 containerd[1805]: time="2025-05-13T23:42:15.777272920Z" level=info msg="starting plugins..."
May 13 23:42:15.777596 containerd[1805]: time="2025-05-13T23:42:15.777286160Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 13 23:42:15.777670 systemd[1]: Started containerd.service - containerd container runtime.
May 13 23:42:15.787346 containerd[1805]: time="2025-05-13T23:42:15.787314760Z" level=info msg="containerd successfully booted in 0.785371s"
May 13 23:42:15.841613 kubelet[1895]: E0513 23:42:15.841534 1895 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:42:15.843082 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:42:15.843201 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:42:15.843650 systemd[1]: kubelet.service: Consumed 699ms CPU time, 247.1M memory peak.
May 13 23:42:16.148152 sshd_keygen[1762]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 23:42:16.166785 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 23:42:16.176278 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 23:42:16.190798 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
May 13 23:42:16.202982 systemd[1]: issuegen.service: Deactivated successfully.
May 13 23:42:16.204608 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 23:42:16.215840 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 23:42:16.232757 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
May 13 23:42:16.250891 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 23:42:16.261778 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 13 23:42:16.270791 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 13 23:42:16.282549 systemd[1]: Reached target getty.target - Login Prompts.
May 13 23:42:16.288672 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 23:42:16.295573 systemd[1]: Startup finished in 726ms (kernel) + 12.026s (initrd) + 11.616s (userspace) = 24.368s.
May 13 23:42:16.510308 login[1933]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying
May 13 23:42:16.511045 login[1932]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:42:16.520943 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 23:42:16.522195 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 23:42:16.524802 systemd-logind[1754]: New session 2 of user core.
May 13 23:42:16.541253 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 23:42:16.544155 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 23:42:16.564359 (systemd)[1940]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 23:42:16.566747 systemd-logind[1754]: New session c1 of user core.
May 13 23:42:16.721749 systemd[1940]: Queued start job for default target default.target.
May 13 23:42:16.727585 systemd[1940]: Created slice app.slice - User Application Slice.
May 13 23:42:16.727770 systemd[1940]: Reached target paths.target - Paths.
May 13 23:42:16.727876 systemd[1940]: Reached target timers.target - Timers.
May 13 23:42:16.729290 systemd[1940]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 23:42:16.738506 systemd[1940]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 23:42:16.738573 systemd[1940]: Reached target sockets.target - Sockets.
May 13 23:42:16.738640 systemd[1940]: Reached target basic.target - Basic System.
May 13 23:42:16.738670 systemd[1940]: Reached target default.target - Main User Target.
May 13 23:42:16.738695 systemd[1940]: Startup finished in 165ms.
May 13 23:42:16.738828 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 23:42:16.745753 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 23:42:17.515495 login[1933]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:42:17.520343 systemd-logind[1754]: New session 1 of user core.
May 13 23:42:17.527719 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 23:42:17.582637 waagent[1930]: 2025-05-13T23:42:17.582542Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
May 13 23:42:17.588367 waagent[1930]: 2025-05-13T23:42:17.588312Z INFO Daemon Daemon OS: flatcar 4284.0.0
May 13 23:42:17.593336 waagent[1930]: 2025-05-13T23:42:17.593293Z INFO Daemon Daemon Python: 3.11.11
May 13 23:42:17.600559 waagent[1930]: 2025-05-13T23:42:17.598460Z INFO Daemon Daemon Run daemon
May 13 23:42:17.602970 waagent[1930]: 2025-05-13T23:42:17.602809Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4284.0.0'
May 13 23:42:17.612163 waagent[1930]: 2025-05-13T23:42:17.612107Z INFO Daemon Daemon Using waagent for provisioning
May 13 23:42:17.617475 waagent[1930]: 2025-05-13T23:42:17.617431Z INFO Daemon Daemon Activate resource disk
May 13 23:42:17.623484 waagent[1930]: 2025-05-13T23:42:17.623436Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
May 13 23:42:17.635874 waagent[1930]: 2025-05-13T23:42:17.635817Z INFO Daemon Daemon Found device: None
May 13 23:42:17.640999 waagent[1930]: 2025-05-13T23:42:17.640953Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
May 13 23:42:17.649468 waagent[1930]: 2025-05-13T23:42:17.649419Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
May 13 23:42:17.662668 waagent[1930]: 2025-05-13T23:42:17.662621Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 13 23:42:17.668894 waagent[1930]: 2025-05-13T23:42:17.668853Z INFO Daemon Daemon Running default provisioning handler
May 13 23:42:17.680878 waagent[1930]: 2025-05-13T23:42:17.680789Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
May 13 23:42:17.695120 waagent[1930]: 2025-05-13T23:42:17.695069Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
May 13 23:42:17.704675 waagent[1930]: 2025-05-13T23:42:17.704630Z INFO Daemon Daemon cloud-init is enabled: False
May 13 23:42:17.710442 waagent[1930]: 2025-05-13T23:42:17.710399Z INFO Daemon Daemon Copying ovf-env.xml
May 13 23:42:17.798188 waagent[1930]: 2025-05-13T23:42:17.797441Z INFO Daemon Daemon Successfully mounted dvd
May 13 23:42:17.826262 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
May 13 23:42:17.829673 waagent[1930]: 2025-05-13T23:42:17.828557Z INFO Daemon Daemon Detect protocol endpoint
May 13 23:42:17.833904 waagent[1930]: 2025-05-13T23:42:17.833860Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 13 23:42:17.839576 waagent[1930]: 2025-05-13T23:42:17.839539Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
May 13 23:42:17.846655 waagent[1930]: 2025-05-13T23:42:17.846613Z INFO Daemon Daemon Test for route to 168.63.129.16
May 13 23:42:17.852174 waagent[1930]: 2025-05-13T23:42:17.852138Z INFO Daemon Daemon Route to 168.63.129.16 exists
May 13 23:42:17.858106 waagent[1930]: 2025-05-13T23:42:17.858071Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
May 13 23:42:17.901124 waagent[1930]: 2025-05-13T23:42:17.901083Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
May 13 23:42:17.909330 waagent[1930]: 2025-05-13T23:42:17.909302Z INFO Daemon Daemon Wire protocol version:2012-11-30
May 13 23:42:17.915958 waagent[1930]: 2025-05-13T23:42:17.915920Z INFO Daemon Daemon Server preferred version:2015-04-05
May 13 23:42:18.077663 waagent[1930]: 2025-05-13T23:42:18.077407Z INFO Daemon Daemon Initializing goal state during protocol detection
May 13 23:42:18.085731 waagent[1930]: 2025-05-13T23:42:18.085673Z INFO Daemon Daemon Forcing an update of the goal state.
May 13 23:42:18.096260 waagent[1930]: 2025-05-13T23:42:18.096214Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
May 13 23:42:18.118779 waagent[1930]: 2025-05-13T23:42:18.118743Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164
May 13 23:42:18.125113 waagent[1930]: 2025-05-13T23:42:18.125073Z INFO Daemon
May 13 23:42:18.129113 waagent[1930]: 2025-05-13T23:42:18.129073Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e6014a0e-e65d-402b-bace-ced0c6999de2 eTag: 12400384613178985733 source: Fabric]
May 13 23:42:18.145504 waagent[1930]: 2025-05-13T23:42:18.145460Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
May 13 23:42:18.153785 waagent[1930]: 2025-05-13T23:42:18.153745Z INFO Daemon
May 13 23:42:18.156897 waagent[1930]: 2025-05-13T23:42:18.156861Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
May 13 23:42:18.172061 waagent[1930]: 2025-05-13T23:42:18.172027Z INFO Daemon Daemon Downloading artifacts profile blob
May 13 23:42:18.263799 waagent[1930]: 2025-05-13T23:42:18.263726Z INFO Daemon Downloaded certificate {'thumbprint': '871AEDEB83C93DA910E10E5E21B32D21A6B8BB07', 'hasPrivateKey': False}
May 13 23:42:18.274735 waagent[1930]: 2025-05-13T23:42:18.274695Z INFO Daemon Downloaded certificate {'thumbprint': '0B578C39987D18560EC99028CCCD5492DCF1CA03', 'hasPrivateKey': True}
May 13 23:42:18.286087 waagent[1930]: 2025-05-13T23:42:18.286045Z INFO Daemon Fetch goal state completed
May 13 23:42:18.298774 waagent[1930]: 2025-05-13T23:42:18.298737Z INFO Daemon Daemon Starting provisioning
May 13 23:42:18.304930 waagent[1930]: 2025-05-13T23:42:18.304889Z INFO Daemon Daemon Handle ovf-env.xml.
May 13 23:42:18.310006 waagent[1930]: 2025-05-13T23:42:18.309963Z INFO Daemon Daemon Set hostname [ci-4284.0.0-n-5e434aba7d]
May 13 23:42:18.329329 waagent[1930]: 2025-05-13T23:42:18.329266Z INFO Daemon Daemon Publish hostname [ci-4284.0.0-n-5e434aba7d]
May 13 23:42:18.336561 waagent[1930]: 2025-05-13T23:42:18.336510Z INFO Daemon Daemon Examine /proc/net/route for primary interface
May 13 23:42:18.343648 waagent[1930]: 2025-05-13T23:42:18.343584Z INFO Daemon Daemon Primary interface is [eth0]
May 13 23:42:18.356872 systemd-networkd[1346]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:42:18.356886 systemd-networkd[1346]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:42:18.356912 systemd-networkd[1346]: eth0: DHCP lease lost
May 13 23:42:18.357896 waagent[1930]: 2025-05-13T23:42:18.357816Z INFO Daemon Daemon Create user account if not exists
May 13 23:42:18.363670 waagent[1930]: 2025-05-13T23:42:18.363626Z INFO Daemon Daemon User core already exists, skip useradd
May 13 23:42:18.369302 waagent[1930]: 2025-05-13T23:42:18.369262Z INFO Daemon Daemon Configure sudoer
May 13 23:42:18.374006 waagent[1930]: 2025-05-13T23:42:18.373961Z INFO Daemon Daemon Configure sshd
May 13 23:42:18.378580 waagent[1930]: 2025-05-13T23:42:18.378535Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
May 13 23:42:18.395945 waagent[1930]: 2025-05-13T23:42:18.395770Z INFO Daemon Daemon Deploy ssh public key.
May 13 23:42:18.416670 systemd-networkd[1346]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 13 23:42:19.481229 waagent[1930]: 2025-05-13T23:42:19.481180Z INFO Daemon Daemon Provisioning complete
May 13 23:42:19.499596 waagent[1930]: 2025-05-13T23:42:19.499552Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
May 13 23:42:19.506592 waagent[1930]: 2025-05-13T23:42:19.506541Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
May 13 23:42:19.517504 waagent[1930]: 2025-05-13T23:42:19.517464Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
May 13 23:42:19.647621 waagent[1995]: 2025-05-13T23:42:19.647365Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
May 13 23:42:19.647621 waagent[1995]: 2025-05-13T23:42:19.647505Z INFO ExtHandler ExtHandler OS: flatcar 4284.0.0
May 13 23:42:19.647621 waagent[1995]: 2025-05-13T23:42:19.647549Z INFO ExtHandler ExtHandler Python: 3.11.11
May 13 23:42:19.647621 waagent[1995]: 2025-05-13T23:42:19.647612Z INFO ExtHandler ExtHandler CPU Arch: aarch64
May 13 23:42:19.913309 waagent[1995]: 2025-05-13T23:42:19.913171Z INFO ExtHandler ExtHandler Distro: flatcar-4284.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
May 13 23:42:19.913475 waagent[1995]: 2025-05-13T23:42:19.913436Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 13 23:42:19.913525 waagent[1995]: 2025-05-13T23:42:19.913503Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
May 13 23:42:19.920532 waagent[1995]: 2025-05-13T23:42:19.920483Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
May 13 23:42:19.925970 waagent[1995]: 2025-05-13T23:42:19.925936Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164
May 13 23:42:19.926411 waagent[1995]: 2025-05-13T23:42:19.926377Z INFO ExtHandler
May 13 23:42:19.926473 waagent[1995]: 2025-05-13T23:42:19.926450Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 89dc9653-47a9-4c30-9e23-86cdbd8467ea eTag: 12400384613178985733 source: Fabric]
May 13 23:42:19.926764 waagent[1995]: 2025-05-13T23:42:19.926731Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
May 13 23:42:19.927263 waagent[1995]: 2025-05-13T23:42:19.927225Z INFO ExtHandler
May 13 23:42:19.927311 waagent[1995]: 2025-05-13T23:42:19.927289Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
May 13 23:42:19.940222 waagent[1995]: 2025-05-13T23:42:19.940189Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
May 13 23:42:20.026145 waagent[1995]: 2025-05-13T23:42:20.026064Z INFO ExtHandler Downloaded certificate {'thumbprint': '871AEDEB83C93DA910E10E5E21B32D21A6B8BB07', 'hasPrivateKey': False}
May 13 23:42:20.026517 waagent[1995]: 2025-05-13T23:42:20.026481Z INFO ExtHandler Downloaded certificate {'thumbprint': '0B578C39987D18560EC99028CCCD5492DCF1CA03', 'hasPrivateKey': True}
May 13 23:42:20.026927 waagent[1995]: 2025-05-13T23:42:20.026894Z INFO ExtHandler Fetch goal state completed
May 13 23:42:20.042212 waagent[1995]: 2025-05-13T23:42:20.042158Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025)
May 13 23:42:20.046432 waagent[1995]: 2025-05-13T23:42:20.046379Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1995
May 13 23:42:20.046553 waagent[1995]: 2025-05-13T23:42:20.046522Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
May 13 23:42:20.046882 waagent[1995]: 2025-05-13T23:42:20.046850Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
May 13 23:42:20.048307 waagent[1995]: 2025-05-13T23:42:20.048269Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk']
May 13 23:42:20.048729 waagent[1995]: 2025-05-13T23:42:20.048696Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
May 13 23:42:20.048875 waagent[1995]: 2025-05-13T23:42:20.048847Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
May 13 23:42:20.049442 waagent[1995]: 2025-05-13T23:42:20.049409Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
May 13 23:42:20.068455 waagent[1995]: 2025-05-13T23:42:20.068415Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
May 13 23:42:20.068650 waagent[1995]: 2025-05-13T23:42:20.068619Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
May 13 23:42:20.073828 waagent[1995]: 2025-05-13T23:42:20.073798Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
May 13 23:42:20.082342 systemd[1]: Reload requested from client PID 2012 ('systemctl') (unit waagent.service)...
May 13 23:42:20.082359 systemd[1]: Reloading...
May 13 23:42:20.155625 zram_generator::config[2054]: No configuration found.
May 13 23:42:20.263109 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:42:20.367313 systemd[1]: Reloading finished in 284 ms.
May 13 23:42:20.385547 waagent[1995]: 2025-05-13T23:42:20.380889Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
May 13 23:42:20.385547 waagent[1995]: 2025-05-13T23:42:20.381033Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
May 13 23:42:20.658027 waagent[1995]: 2025-05-13T23:42:20.657901Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
May 13 23:42:20.658302 waagent[1995]: 2025-05-13T23:42:20.658242Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
May 13 23:42:20.659040 waagent[1995]: 2025-05-13T23:42:20.658963Z INFO ExtHandler ExtHandler Starting env monitor service.
May 13 23:42:20.659320 waagent[1995]: 2025-05-13T23:42:20.659095Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 13 23:42:20.659545 waagent[1995]: 2025-05-13T23:42:20.659500Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
May 13 23:42:20.659751 waagent[1995]: 2025-05-13T23:42:20.659561Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
May 13 23:42:20.659882 waagent[1995]: 2025-05-13T23:42:20.659849Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 13 23:42:20.659986 waagent[1995]: 2025-05-13T23:42:20.659922Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
May 13 23:42:20.660270 waagent[1995]: 2025-05-13T23:42:20.660208Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
May 13 23:42:20.660368 waagent[1995]: 2025-05-13T23:42:20.660320Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
May 13 23:42:20.660572 waagent[1995]: 2025-05-13T23:42:20.660529Z INFO EnvHandler ExtHandler Configure routes
May 13 23:42:20.661072 waagent[1995]: 2025-05-13T23:42:20.661030Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
May 13 23:42:20.661268 waagent[1995]: 2025-05-13T23:42:20.661224Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
May 13 23:42:20.661367 waagent[1995]: 2025-05-13T23:42:20.661284Z INFO EnvHandler ExtHandler Gateway:None
May 13 23:42:20.661406 waagent[1995]: 2025-05-13T23:42:20.661381Z INFO EnvHandler ExtHandler Routes:None
May 13 23:42:20.662485 waagent[1995]: 2025-05-13T23:42:20.662438Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
May 13 23:42:20.664621 waagent[1995]: 2025-05-13T23:42:20.663015Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
May 13 23:42:20.664621 waagent[1995]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
May 13 23:42:20.664621 waagent[1995]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
May 13 23:42:20.664621 waagent[1995]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
May 13 23:42:20.664621 waagent[1995]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
May 13 23:42:20.664621 waagent[1995]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 13 23:42:20.664621 waagent[1995]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 13 23:42:20.664621 waagent[1995]: 2025-05-13T23:42:20.663153Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
May 13 23:42:20.672520 waagent[1995]: 2025-05-13T23:42:20.672463Z INFO ExtHandler ExtHandler
May 13 23:42:20.672898 waagent[1995]: 2025-05-13T23:42:20.672853Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: c1dffe72-fac8-46cb-9f93-e81aa21974d3 correlation c039632e-6edb-4cfd-aff9-dd4a1e928057 created: 2025-05-13T23:41:07.299995Z]
May 13 23:42:20.673272 waagent[1995]: 2025-05-13T23:42:20.673233Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
May 13 23:42:20.673898 waagent[1995]: 2025-05-13T23:42:20.673863Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
May 13 23:42:20.724676 waagent[1995]: 2025-05-13T23:42:20.724623Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 550A31C5-0AFE-46F8-913F-66C924A0E086;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
May 13 23:42:20.742883 waagent[1995]: 2025-05-13T23:42:20.742826Z INFO MonitorHandler ExtHandler Network interfaces:
May 13 23:42:20.742883 waagent[1995]: Executing ['ip', '-a', '-o', 'link']:
May 13 23:42:20.742883 waagent[1995]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
May 13 23:42:20.742883 waagent[1995]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f7:81:40 brd ff:ff:ff:ff:ff:ff
May 13 23:42:20.742883 waagent[1995]: 3: enP11417s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f7:81:40 brd ff:ff:ff:ff:ff:ff\ altname enP11417p0s2
May 13 23:42:20.742883 waagent[1995]: Executing ['ip', '-4', '-a', '-o', 'address']:
May 13 23:42:20.742883 waagent[1995]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
May 13 23:42:20.742883 waagent[1995]: 2: eth0 inet 10.200.20.14/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
May 13 23:42:20.742883 waagent[1995]: Executing ['ip', '-6', '-a', '-o', 'address']:
May 13 23:42:20.742883 waagent[1995]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
May 13 23:42:20.742883 waagent[1995]: 2: eth0 inet6 fe80::20d:3aff:fef7:8140/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
May 13 23:42:20.742883 waagent[1995]: 3: enP11417s1 inet6 fe80::20d:3aff:fef7:8140/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
May 13 23:42:20.774015 waagent[1995]: 2025-05-13T23:42:20.773856Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
May 13 23:42:20.774015 waagent[1995]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:42:20.774015 waagent[1995]: pkts bytes target prot opt in out source destination
May 13 23:42:20.774015 waagent[1995]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 13 23:42:20.774015 waagent[1995]: pkts bytes target prot opt in out source destination
May 13 23:42:20.774015 waagent[1995]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:42:20.774015 waagent[1995]: pkts bytes target prot opt in out source destination
May 13 23:42:20.774015 waagent[1995]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 13 23:42:20.774015 waagent[1995]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 13 23:42:20.774015 waagent[1995]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 13 23:42:20.777380 waagent[1995]: 2025-05-13T23:42:20.777331Z INFO EnvHandler ExtHandler Current Firewall rules:
May 13 23:42:20.777380 waagent[1995]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:42:20.777380 waagent[1995]: pkts bytes target prot opt in out source destination
May 13 23:42:20.777380 waagent[1995]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 13 23:42:20.777380 waagent[1995]: pkts bytes target prot opt in out source destination
May 13 23:42:20.777380 waagent[1995]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:42:20.777380 waagent[1995]: pkts bytes target prot opt in out source destination
May 13 23:42:20.777380 waagent[1995]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 13 23:42:20.777380 waagent[1995]: 3 364 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 13 23:42:20.777380 waagent[1995]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 13 23:42:20.777607 waagent[1995]: 2025-05-13T23:42:20.777566Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
May 13 23:42:24.050943 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 23:42:24.052218 systemd[1]: Started sshd@0-10.200.20.14:22-10.200.16.10:46202.service - OpenSSH per-connection server daemon (10.200.16.10:46202).
May 13 23:42:24.594388 sshd[2141]: Accepted publickey for core from 10.200.16.10 port 46202 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:42:24.595654 sshd-session[2141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:42:24.600656 systemd-logind[1754]: New session 3 of user core.
May 13 23:42:24.607752 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 23:42:25.000781 systemd[1]: Started sshd@1-10.200.20.14:22-10.200.16.10:46218.service - OpenSSH per-connection server daemon (10.200.16.10:46218).
May 13 23:42:25.509240 sshd[2146]: Accepted publickey for core from 10.200.16.10 port 46218 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:42:25.510477 sshd-session[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:42:25.515632 systemd-logind[1754]: New session 4 of user core.
May 13 23:42:25.521841 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 23:42:25.871954 sshd[2148]: Connection closed by 10.200.16.10 port 46218
May 13 23:42:25.871729 sshd-session[2146]: pam_unix(sshd:session): session closed for user core
May 13 23:42:25.875025 systemd-logind[1754]: Session 4 logged out. Waiting for processes to exit.
May 13 23:42:25.875270 systemd[1]: sshd@1-10.200.20.14:22-10.200.16.10:46218.service: Deactivated successfully.
May 13 23:42:25.876983 systemd[1]: session-4.scope: Deactivated successfully.
May 13 23:42:25.877904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 23:42:25.880245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:42:25.881014 systemd-logind[1754]: Removed session 4.
May 13 23:42:25.958931 systemd[1]: Started sshd@2-10.200.20.14:22-10.200.16.10:46226.service - OpenSSH per-connection server daemon (10.200.16.10:46226).
May 13 23:42:25.999149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:42:26.008956 (kubelet)[2164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:42:26.158070 kubelet[2164]: E0513 23:42:26.157934 2164 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:42:26.161105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:42:26.161244 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:42:26.161745 systemd[1]: kubelet.service: Consumed 129ms CPU time, 101.9M memory peak.
May 13 23:42:26.458281 sshd[2157]: Accepted publickey for core from 10.200.16.10 port 46226 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:42:26.459577 sshd-session[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:42:26.463668 systemd-logind[1754]: New session 5 of user core.
May 13 23:42:26.469738 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 23:42:26.820839 sshd[2172]: Connection closed by 10.200.16.10 port 46226
May 13 23:42:26.819992 sshd-session[2157]: pam_unix(sshd:session): session closed for user core
May 13 23:42:26.823071 systemd[1]: sshd@2-10.200.20.14:22-10.200.16.10:46226.service: Deactivated successfully.
May 13 23:42:26.824853 systemd[1]: session-5.scope: Deactivated successfully.
May 13 23:42:26.825768 systemd-logind[1754]: Session 5 logged out. Waiting for processes to exit.
May 13 23:42:26.826577 systemd-logind[1754]: Removed session 5.
May 13 23:42:26.908925 systemd[1]: Started sshd@3-10.200.20.14:22-10.200.16.10:46238.service - OpenSSH per-connection server daemon (10.200.16.10:46238).
May 13 23:42:27.402450 sshd[2178]: Accepted publickey for core from 10.200.16.10 port 46238 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:42:27.403663 sshd-session[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:42:27.408647 systemd-logind[1754]: New session 6 of user core.
May 13 23:42:27.413791 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 23:42:27.763816 sshd[2180]: Connection closed by 10.200.16.10 port 46238
May 13 23:42:27.764474 sshd-session[2178]: pam_unix(sshd:session): session closed for user core
May 13 23:42:27.767504 systemd[1]: sshd@3-10.200.20.14:22-10.200.16.10:46238.service: Deactivated successfully.
May 13 23:42:27.770101 systemd[1]: session-6.scope: Deactivated successfully.
May 13 23:42:27.770985 systemd-logind[1754]: Session 6 logged out. Waiting for processes to exit.
May 13 23:42:27.772009 systemd-logind[1754]: Removed session 6.
May 13 23:42:27.849769 systemd[1]: Started sshd@4-10.200.20.14:22-10.200.16.10:46250.service - OpenSSH per-connection server daemon (10.200.16.10:46250).
May 13 23:42:28.312041 sshd[2186]: Accepted publickey for core from 10.200.16.10 port 46250 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:42:28.313223 sshd-session[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:42:28.318487 systemd-logind[1754]: New session 7 of user core.
May 13 23:42:28.323790 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 23:42:28.643225 sudo[2189]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 23:42:28.643506 sudo[2189]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:42:28.670751 sudo[2189]: pam_unix(sudo:session): session closed for user root
May 13 23:42:28.740464 sshd[2188]: Connection closed by 10.200.16.10 port 46250
May 13 23:42:28.741199 sshd-session[2186]: pam_unix(sshd:session): session closed for user core
May 13 23:42:28.744856 systemd[1]: sshd@4-10.200.20.14:22-10.200.16.10:46250.service: Deactivated successfully.
May 13 23:42:28.746336 systemd[1]: session-7.scope: Deactivated successfully.
May 13 23:42:28.747020 systemd-logind[1754]: Session 7 logged out. Waiting for processes to exit.
May 13 23:42:28.748188 systemd-logind[1754]: Removed session 7.
May 13 23:42:28.823903 systemd[1]: Started sshd@5-10.200.20.14:22-10.200.16.10:49144.service - OpenSSH per-connection server daemon (10.200.16.10:49144).
May 13 23:42:29.288268 sshd[2195]: Accepted publickey for core from 10.200.16.10 port 49144 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:42:29.289475 sshd-session[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:42:29.293696 systemd-logind[1754]: New session 8 of user core.
May 13 23:42:29.302748 systemd[1]: Started session-8.scope - Session 8 of User core.
May 13 23:42:29.546152 sudo[2199]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 23:42:29.547006 sudo[2199]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:42:29.550243 sudo[2199]: pam_unix(sudo:session): session closed for user root
May 13 23:42:29.554719 sudo[2198]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 13 23:42:29.554967 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:42:29.563639 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:42:29.599410 augenrules[2221]: No rules
May 13 23:42:29.600860 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:42:29.601059 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:42:29.603949 sudo[2198]: pam_unix(sudo:session): session closed for user root
May 13 23:42:29.691840 sshd[2197]: Connection closed by 10.200.16.10 port 49144
May 13 23:42:29.691748 sshd-session[2195]: pam_unix(sshd:session): session closed for user core
May 13 23:42:29.694458 systemd[1]: sshd@5-10.200.20.14:22-10.200.16.10:49144.service: Deactivated successfully.
May 13 23:42:29.696169 systemd[1]: session-8.scope: Deactivated successfully.
May 13 23:42:29.697655 systemd-logind[1754]: Session 8 logged out. Waiting for processes to exit.
May 13 23:42:29.698604 systemd-logind[1754]: Removed session 8.
May 13 23:42:29.795471 systemd[1]: Started sshd@6-10.200.20.14:22-10.200.16.10:49154.service - OpenSSH per-connection server daemon (10.200.16.10:49154).
May 13 23:42:30.289030 sshd[2230]: Accepted publickey for core from 10.200.16.10 port 49154 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:42:30.293822 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:42:30.297880 systemd-logind[1754]: New session 9 of user core.
May 13 23:42:30.307730 systemd[1]: Started session-9.scope - Session 9 of User core.
May 13 23:42:30.560709 sudo[2233]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 23:42:30.560959 sudo[2233]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:42:31.592165 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 23:42:31.602888 (dockerd)[2250]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 23:42:32.106696 dockerd[2250]: time="2025-05-13T23:42:32.106639480Z" level=info msg="Starting up"
May 13 23:42:32.110282 dockerd[2250]: time="2025-05-13T23:42:32.110139200Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 13 23:42:32.256747 dockerd[2250]: time="2025-05-13T23:42:32.256683240Z" level=info msg="Loading containers: start."
May 13 23:42:32.463823 kernel: Initializing XFRM netlink socket
May 13 23:42:32.550236 systemd-networkd[1346]: docker0: Link UP
May 13 23:42:32.636922 dockerd[2250]: time="2025-05-13T23:42:32.636879200Z" level=info msg="Loading containers: done."
May 13 23:42:32.733974 dockerd[2250]: time="2025-05-13T23:42:32.733558800Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 23:42:32.733974 dockerd[2250]: time="2025-05-13T23:42:32.733716360Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 13 23:42:32.733974 dockerd[2250]: time="2025-05-13T23:42:32.733840280Z" level=info msg="Daemon has completed initialization"
May 13 23:42:32.803898 dockerd[2250]: time="2025-05-13T23:42:32.803426960Z" level=info msg="API listen on /run/docker.sock"
May 13 23:42:32.804065 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 23:42:33.612066 containerd[1805]: time="2025-05-13T23:42:33.612025720Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 13 23:42:34.659913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651565162.mount: Deactivated successfully.
May 13 23:42:36.072749 containerd[1805]: time="2025-05-13T23:42:36.072700800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:36.076844 containerd[1805]: time="2025-05-13T23:42:36.076793400Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233118"
May 13 23:42:36.080900 containerd[1805]: time="2025-05-13T23:42:36.080869320Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:36.091738 containerd[1805]: time="2025-05-13T23:42:36.091694000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:36.093138 containerd[1805]: time="2025-05-13T23:42:36.092730480Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.4806642s"
May 13 23:42:36.093138 containerd[1805]: time="2025-05-13T23:42:36.092763240Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\""
May 13 23:42:36.093332 containerd[1805]: time="2025-05-13T23:42:36.093300200Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 13 23:42:36.297868 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 23:42:36.299274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:42:36.416793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:42:36.420055 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:42:36.476744 kubelet[2507]: E0513 23:42:36.476685 2507 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:42:36.479130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:42:36.479367 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:42:36.479913 systemd[1]: kubelet.service: Consumed 130ms CPU time, 102.9M memory peak.
May 13 23:42:37.831399 containerd[1805]: time="2025-05-13T23:42:37.831347960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:37.835735 containerd[1805]: time="2025-05-13T23:42:37.835706440Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529571"
May 13 23:42:37.839357 containerd[1805]: time="2025-05-13T23:42:37.839313920Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:37.848299 containerd[1805]: time="2025-05-13T23:42:37.848226200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:37.849328 containerd[1805]: time="2025-05-13T23:42:37.849213240Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.75587748s"
May 13 23:42:37.849328 containerd[1805]: time="2025-05-13T23:42:37.849243520Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\""
May 13 23:42:37.849781 containerd[1805]: time="2025-05-13T23:42:37.849753440Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 13 23:42:37.867396 chronyd[1737]: Selected source PHC0
May 13 23:42:39.405631 containerd[1805]: time="2025-05-13T23:42:39.405194446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:39.408907 containerd[1805]: time="2025-05-13T23:42:39.408696490Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482173"
May 13 23:42:39.415321 containerd[1805]: time="2025-05-13T23:42:39.415271859Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:39.422466 containerd[1805]: time="2025-05-13T23:42:39.422416508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:39.423295 containerd[1805]: time="2025-05-13T23:42:39.423261669Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.573474829s"
May 13 23:42:39.423360 containerd[1805]: time="2025-05-13T23:42:39.423294949Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\""
May 13 23:42:39.424437 containerd[1805]: time="2025-05-13T23:42:39.424409871Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 13 23:42:40.687530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2865786038.mount: Deactivated successfully.
May 13 23:42:41.065654 containerd[1805]: time="2025-05-13T23:42:41.065153835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:41.069431 containerd[1805]: time="2025-05-13T23:42:41.069225160Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370351"
May 13 23:42:41.072922 containerd[1805]: time="2025-05-13T23:42:41.072858885Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:41.080065 containerd[1805]: time="2025-05-13T23:42:41.080000614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:41.080950 containerd[1805]: time="2025-05-13T23:42:41.080553535Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.656109584s"
May 13 23:42:41.080950 containerd[1805]: time="2025-05-13T23:42:41.080614655Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\""
May 13 23:42:41.081187 containerd[1805]: time="2025-05-13T23:42:41.081157216Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 13 23:42:41.820162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4145411293.mount: Deactivated successfully.
May 13 23:42:43.205628 containerd[1805]: time="2025-05-13T23:42:43.205172531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:43.208397 containerd[1805]: time="2025-05-13T23:42:43.208150055Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
May 13 23:42:43.216274 containerd[1805]: time="2025-05-13T23:42:43.216221065Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:43.225220 containerd[1805]: time="2025-05-13T23:42:43.225135637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:43.226233 containerd[1805]: time="2025-05-13T23:42:43.226102798Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.144907542s"
May 13 23:42:43.226233 containerd[1805]: time="2025-05-13T23:42:43.226139078Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 13 23:42:43.226716 containerd[1805]: time="2025-05-13T23:42:43.226527118Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 13 23:42:43.864916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028828631.mount: Deactivated successfully.
May 13 23:42:43.900259 containerd[1805]: time="2025-05-13T23:42:43.900208632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:42:43.904193 containerd[1805]: time="2025-05-13T23:42:43.904141477Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
May 13 23:42:43.909370 containerd[1805]: time="2025-05-13T23:42:43.909341004Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:42:43.918093 containerd[1805]: time="2025-05-13T23:42:43.918039095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:42:43.919126 containerd[1805]: time="2025-05-13T23:42:43.918694656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 692.136097ms"
May 13 23:42:43.919126 containerd[1805]: time="2025-05-13T23:42:43.918725376Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 13 23:42:43.919336 containerd[1805]: time="2025-05-13T23:42:43.919296297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 13 23:42:45.716630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1116275797.mount: Deactivated successfully.
May 13 23:42:46.548553 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 13 23:42:46.550212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:42:46.669319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:42:46.678012 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:42:47.482291 kubelet[2638]: E0513 23:42:47.006436 2638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:42:47.008585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:42:47.008747 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:42:47.009028 systemd[1]: kubelet.service: Consumed 132ms CPU time, 102.1M memory peak.
May 13 23:42:48.437625 containerd[1805]: time="2025-05-13T23:42:48.436803979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:48.440994 containerd[1805]: time="2025-05-13T23:42:48.440725826Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469"
May 13 23:42:48.447333 containerd[1805]: time="2025-05-13T23:42:48.447257436Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:48.454980 containerd[1805]: time="2025-05-13T23:42:48.454928208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:48.456339 containerd[1805]: time="2025-05-13T23:42:48.456029970Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.536698553s"
May 13 23:42:48.456339 containerd[1805]: time="2025-05-13T23:42:48.456069450Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
May 13 23:42:53.335199 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:42:53.335489 systemd[1]: kubelet.service: Consumed 132ms CPU time, 102.1M memory peak.
May 13 23:42:53.338953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:42:53.363391 systemd[1]: Reload requested from client PID 2682 ('systemctl') (unit session-9.scope)...
May 13 23:42:53.363516 systemd[1]: Reloading...
May 13 23:42:53.471003 zram_generator::config[2729]: No configuration found.
May 13 23:42:53.580788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:42:53.686081 systemd[1]: Reloading finished in 322 ms.
May 13 23:42:53.730043 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:42:53.734173 systemd[1]: kubelet.service: Deactivated successfully.
May 13 23:42:53.734377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:42:53.734417 systemd[1]: kubelet.service: Consumed 93ms CPU time, 90.2M memory peak.
May 13 23:42:53.735801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:42:53.855635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:42:53.862894 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 23:42:53.897568 kubelet[2797]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:42:53.897568 kubelet[2797]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 13 23:42:53.897568 kubelet[2797]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:42:53.897922 kubelet[2797]: I0513 23:42:53.897645 2797 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 23:42:54.485416 kubelet[2797]: I0513 23:42:54.485374 2797 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 13 23:42:54.485416 kubelet[2797]: I0513 23:42:54.485407 2797 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 23:42:54.485756 kubelet[2797]: I0513 23:42:54.485739 2797 server.go:954] "Client rotation is on, will bootstrap in background"
May 13 23:42:54.507611 kubelet[2797]: I0513 23:42:54.507569 2797 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:42:54.510005 kubelet[2797]: E0513 23:42:54.509834 2797 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError"
May 13 23:42:54.516876 kubelet[2797]: I0513 23:42:54.516845 2797 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 23:42:54.520218 kubelet[2797]: I0513 23:42:54.520192 2797 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 23:42:54.521797 kubelet[2797]: I0513 23:42:54.521760 2797 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 23:42:54.522003 kubelet[2797]: I0513 23:42:54.521800 2797 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-5e434aba7d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 23:42:54.522097 kubelet[2797]: I0513 23:42:54.522014 2797 topology_manager.go:138] "Creating topology manager with none policy"
May 13 23:42:54.522097 kubelet[2797]: I0513 23:42:54.522023 2797 container_manager_linux.go:304] "Creating device plugin manager"
May 13 23:42:54.522196 kubelet[2797]: I0513 23:42:54.522177 2797 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:42:54.525458 kubelet[2797]: I0513 23:42:54.525436 2797 kubelet.go:446] "Attempting to sync node with API server"
May 13 23:42:54.525506 kubelet[2797]: I0513 23:42:54.525463 2797 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 23:42:54.525506 kubelet[2797]: I0513 23:42:54.525487 2797 kubelet.go:352] "Adding apiserver pod source"
May 13 23:42:54.525506 kubelet[2797]: I0513 23:42:54.525500 2797 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 23:42:54.529730 kubelet[2797]: I0513 23:42:54.529038 2797 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 23:42:54.529730 kubelet[2797]: I0513 23:42:54.529519 2797 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 23:42:54.529730 kubelet[2797]: W0513 23:42:54.529566 2797 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 23:42:54.530196 kubelet[2797]: I0513 23:42:54.530165 2797 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 13 23:42:54.530250 kubelet[2797]: I0513 23:42:54.530221 2797 server.go:1287] "Started kubelet"
May 13 23:42:54.532405 kubelet[2797]: W0513 23:42:54.530348 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
May 13 23:42:54.532405 kubelet[2797]: E0513 23:42:54.530405 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError"
May 13 23:42:54.532516 kubelet[2797]: I0513 23:42:54.532492 2797 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:42:54.533505 kubelet[2797]: W0513 23:42:54.533464 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-5e434aba7d&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
May 13 23:42:54.533659 kubelet[2797]: E0513 23:42:54.533638 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-5e434aba7d&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError"
May 13 23:42:54.534187 kubelet[2797]: I0513 23:42:54.534156 2797 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 13 23:42:54.534370 kubelet[2797]: E0513 23:42:54.534345 2797 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-5e434aba7d\" not found"
May 13 23:42:54.534774 kubelet[2797]: I0513 23:42:54.534749 2797 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:42:54.535677 kubelet[2797]: I0513 23:42:54.535657 2797 server.go:490] "Adding debug handlers to kubelet server"
May 13 23:42:54.536622 kubelet[2797]: I0513 23:42:54.536547 2797 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:42:54.536899 kubelet[2797]: I0513 23:42:54.536882 2797 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:42:54.538902 kubelet[2797]: I0513 23:42:54.538878 2797 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 23:42:54.538971 kubelet[2797]: I0513 23:42:54.538939 2797 reconciler.go:26] "Reconciler: start to sync state"
May 13 23:42:54.540213 kubelet[2797]: I0513 23:42:54.540190 2797 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 23:42:54.541945 kubelet[2797]: W0513 23:42:54.541653 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
May 13 23:42:54.542095 kubelet[2797]: E0513 23:42:54.542074 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError"
May 13 23:42:54.542310 kubelet[2797]: E0513 23:42:54.542204 2797 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-5e434aba7d.183f3abf7b9e6ae0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-5e434aba7d,UID:ci-4284.0.0-n-5e434aba7d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-5e434aba7d,},FirstTimestamp:2025-05-13 23:42:54.530185952 +0000 UTC m=+0.664343831,LastTimestamp:2025-05-13 23:42:54.530185952 +0000 UTC m=+0.664343831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-5e434aba7d,}"
May 13 23:42:54.542490 kubelet[2797]: E0513 23:42:54.542470 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-5e434aba7d?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="200ms"
May 13 23:42:54.544450 kubelet[2797]: I0513 23:42:54.544431 2797 factory.go:221] Registration of the containerd container factory successfully
May 13 23:42:54.544546 kubelet[2797]: I0513 23:42:54.544537 2797 factory.go:221] Registration of the systemd container factory successfully
May 13 23:42:54.544696 kubelet[2797]: I0513 23:42:54.544677 2797 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 23:42:54.566253 kubelet[2797]: E0513 23:42:54.566215 2797 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 23:42:54.571503 kubelet[2797]: I0513 23:42:54.571479 2797 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 13 23:42:54.571503 kubelet[2797]: I0513 23:42:54.571496 2797 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 13 23:42:54.571659 kubelet[2797]: I0513 23:42:54.571515 2797 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:42:54.578621 kubelet[2797]: I0513 23:42:54.578584 2797 policy_none.go:49] "None policy: Start"
May 13 23:42:54.578621 kubelet[2797]: I0513 23:42:54.578620 2797 memory_manager.go:186] "Starting memorymanager" policy="None"
May 13 23:42:54.578744 kubelet[2797]: I0513 23:42:54.578633 2797 state_mem.go:35] "Initializing new in-memory state store"
May 13 23:42:54.588477 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 13 23:42:54.592797 kubelet[2797]: I0513 23:42:54.592758 2797 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 23:42:54.594473 kubelet[2797]: I0513 23:42:54.594156 2797 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 23:42:54.594473 kubelet[2797]: I0513 23:42:54.594182 2797 status_manager.go:227] "Starting to sync pod status with apiserver"
May 13 23:42:54.594473 kubelet[2797]: I0513 23:42:54.594200 2797 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 13 23:42:54.594473 kubelet[2797]: I0513 23:42:54.594206 2797 kubelet.go:2388] "Starting kubelet main sync loop"
May 13 23:42:54.594473 kubelet[2797]: E0513 23:42:54.594242 2797 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 23:42:54.597032 kubelet[2797]: W0513 23:42:54.596990 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
May 13 23:42:54.597191 kubelet[2797]: E0513 23:42:54.597170 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError"
May 13 23:42:54.602102 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 13 23:42:54.604952 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 13 23:42:54.610372 kubelet[2797]: I0513 23:42:54.610337 2797 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 23:42:54.610545 kubelet[2797]: I0513 23:42:54.610525 2797 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 23:42:54.610576 kubelet[2797]: I0513 23:42:54.610543 2797 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 23:42:54.611153 kubelet[2797]: I0513 23:42:54.611132 2797 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 23:42:54.613024 kubelet[2797]: E0513 23:42:54.612748 2797 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 13 23:42:54.613024 kubelet[2797]: E0513 23:42:54.612788 2797 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-n-5e434aba7d\" not found"
May 13 23:42:54.704986 systemd[1]: Created slice kubepods-burstable-podb50d69e8045ff867c01d52744c7b6e2e.slice - libcontainer container kubepods-burstable-podb50d69e8045ff867c01d52744c7b6e2e.slice.
May 13 23:42:54.713038 kubelet[2797]: I0513 23:42:54.712690 2797 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.713281 kubelet[2797]: E0513 23:42:54.713260 2797 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.715465 kubelet[2797]: E0513 23:42:54.715287 2797 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-5e434aba7d\" not found" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.718446 systemd[1]: Created slice kubepods-burstable-pod3cfe37a1391ae262915799f160ea4d67.slice - libcontainer container kubepods-burstable-pod3cfe37a1391ae262915799f160ea4d67.slice.
May 13 23:42:54.728989 kubelet[2797]: E0513 23:42:54.728798 2797 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-5e434aba7d\" not found" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.731501 systemd[1]: Created slice kubepods-burstable-pod68f437b5a8cb8f2fd6f515be8ba55a6f.slice - libcontainer container kubepods-burstable-pod68f437b5a8cb8f2fd6f515be8ba55a6f.slice.
May 13 23:42:54.733129 kubelet[2797]: E0513 23:42:54.732976 2797 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-5e434aba7d\" not found" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.740478 kubelet[2797]: I0513 23:42:54.740402 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3cfe37a1391ae262915799f160ea4d67-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-5e434aba7d\" (UID: \"3cfe37a1391ae262915799f160ea4d67\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.740745 kubelet[2797]: I0513 23:42:54.740618 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3cfe37a1391ae262915799f160ea4d67-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-5e434aba7d\" (UID: \"3cfe37a1391ae262915799f160ea4d67\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.740745 kubelet[2797]: I0513 23:42:54.740643 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3cfe37a1391ae262915799f160ea4d67-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-5e434aba7d\" (UID: \"3cfe37a1391ae262915799f160ea4d67\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.740745 kubelet[2797]: I0513 23:42:54.740661 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b50d69e8045ff867c01d52744c7b6e2e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-5e434aba7d\" (UID: \"b50d69e8045ff867c01d52744c7b6e2e\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.740745 kubelet[2797]: I0513 23:42:54.740709 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3cfe37a1391ae262915799f160ea4d67-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-5e434aba7d\" (UID: \"3cfe37a1391ae262915799f160ea4d67\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.740745 kubelet[2797]: I0513 23:42:54.740726 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cfe37a1391ae262915799f160ea4d67-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-5e434aba7d\" (UID: \"3cfe37a1391ae262915799f160ea4d67\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.741007 kubelet[2797]: I0513 23:42:54.740910 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68f437b5a8cb8f2fd6f515be8ba55a6f-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-5e434aba7d\" (UID: \"68f437b5a8cb8f2fd6f515be8ba55a6f\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.741007 kubelet[2797]: I0513 23:42:54.740935 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b50d69e8045ff867c01d52744c7b6e2e-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-5e434aba7d\" (UID: \"b50d69e8045ff867c01d52744c7b6e2e\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.741007 kubelet[2797]: I0513 23:42:54.740949 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b50d69e8045ff867c01d52744c7b6e2e-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-5e434aba7d\" (UID: \"b50d69e8045ff867c01d52744c7b6e2e\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.743062 kubelet[2797]: E0513 23:42:54.743027 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-5e434aba7d?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="400ms"
May 13 23:42:54.915141 kubelet[2797]: I0513 23:42:54.915086 2797 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:54.915575 kubelet[2797]: E0513 23:42:54.915436 2797 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:55.017212 containerd[1805]: time="2025-05-13T23:42:55.016901193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-5e434aba7d,Uid:b50d69e8045ff867c01d52744c7b6e2e,Namespace:kube-system,Attempt:0,}"
May 13 23:42:55.029717 containerd[1805]: time="2025-05-13T23:42:55.029526902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-5e434aba7d,Uid:3cfe37a1391ae262915799f160ea4d67,Namespace:kube-system,Attempt:0,}"
May 13 23:42:55.034607 containerd[1805]: time="2025-05-13T23:42:55.034540154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-5e434aba7d,Uid:68f437b5a8cb8f2fd6f515be8ba55a6f,Namespace:kube-system,Attempt:0,}"
May 13 23:42:55.144059 kubelet[2797]: E0513 23:42:55.144013 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-5e434aba7d?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="800ms"
May 13 23:42:55.187259 containerd[1805]: time="2025-05-13T23:42:55.187104065Z" level=info msg="connecting to shim 8bbb1c6fd7a66061b7b4feb47c7c2d754dccc77fc38761cbdc98d47b2b2af565" address="unix:///run/containerd/s/c2c3f2d91f40fdb032992c1401df0485877ad6d9520c0435be487748b6fed081" namespace=k8s.io protocol=ttrpc version=3
May 13 23:42:55.202815 containerd[1805]: time="2025-05-13T23:42:55.202747541Z" level=info msg="connecting to shim d5412131346423d54b91e1a28edd98fddf626dc895212ca4f20cc2e2110a3cd6" address="unix:///run/containerd/s/dc6534b88c8e993e841c2f2f03ab68cb49a1715c90fb7ecfb4635679f11e92e9" namespace=k8s.io protocol=ttrpc version=3
May 13 23:42:55.207990 containerd[1805]: time="2025-05-13T23:42:55.207944673Z" level=info msg="connecting to shim 6d789e6d8efee096b9ecadad019a9593b0ed2f230bc1e251b8f51fc6a3ad6439" address="unix:///run/containerd/s/9b1c4eaaf11f45cf9a27a5f6bda38b3633d58fd30ce7e19be05a207fd722b64b" namespace=k8s.io protocol=ttrpc version=3
May 13 23:42:55.235001 systemd[1]: Started cri-containerd-8bbb1c6fd7a66061b7b4feb47c7c2d754dccc77fc38761cbdc98d47b2b2af565.scope - libcontainer container 8bbb1c6fd7a66061b7b4feb47c7c2d754dccc77fc38761cbdc98d47b2b2af565.
May 13 23:42:55.243352 systemd[1]: Started cri-containerd-6d789e6d8efee096b9ecadad019a9593b0ed2f230bc1e251b8f51fc6a3ad6439.scope - libcontainer container 6d789e6d8efee096b9ecadad019a9593b0ed2f230bc1e251b8f51fc6a3ad6439.
May 13 23:42:55.251549 systemd[1]: Started cri-containerd-d5412131346423d54b91e1a28edd98fddf626dc895212ca4f20cc2e2110a3cd6.scope - libcontainer container d5412131346423d54b91e1a28edd98fddf626dc895212ca4f20cc2e2110a3cd6.
May 13 23:42:55.299154 containerd[1805]: time="2025-05-13T23:42:55.299030203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-5e434aba7d,Uid:b50d69e8045ff867c01d52744c7b6e2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bbb1c6fd7a66061b7b4feb47c7c2d754dccc77fc38761cbdc98d47b2b2af565\""
May 13 23:42:55.305576 containerd[1805]: time="2025-05-13T23:42:55.305527698Z" level=info msg="CreateContainer within sandbox \"8bbb1c6fd7a66061b7b4feb47c7c2d754dccc77fc38761cbdc98d47b2b2af565\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 13 23:42:55.324249 containerd[1805]: time="2025-05-13T23:42:55.324189981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-5e434aba7d,Uid:68f437b5a8cb8f2fd6f515be8ba55a6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5412131346423d54b91e1a28edd98fddf626dc895212ca4f20cc2e2110a3cd6\""
May 13 23:42:55.325032 kubelet[2797]: I0513 23:42:55.324659 2797 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:55.325032 kubelet[2797]: E0513 23:42:55.324986 2797 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:55.326642 containerd[1805]: time="2025-05-13T23:42:55.326611906Z" level=info msg="CreateContainer within sandbox \"d5412131346423d54b91e1a28edd98fddf626dc895212ca4f20cc2e2110a3cd6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 13 23:42:55.339997 containerd[1805]: time="2025-05-13T23:42:55.339893337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-5e434aba7d,Uid:3cfe37a1391ae262915799f160ea4d67,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d789e6d8efee096b9ecadad019a9593b0ed2f230bc1e251b8f51fc6a3ad6439\""
May 13 23:42:55.342417 containerd[1805]: time="2025-05-13T23:42:55.342383343Z" level=info msg="CreateContainer within sandbox \"6d789e6d8efee096b9ecadad019a9593b0ed2f230bc1e251b8f51fc6a3ad6439\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 13 23:42:55.363744 containerd[1805]: time="2025-05-13T23:42:55.363691312Z" level=info msg="Container 9889efa6242057f67ca4553d756a735f4753ac0063f754d440e28c6416801527: CDI devices from CRI Config.CDIDevices: []"
May 13 23:42:55.382704 containerd[1805]: time="2025-05-13T23:42:55.382513915Z" level=info msg="Container 34f44993ba75dc543bda71f4d161f58a69ce93aef6dd247640de0b24f34cac1d: CDI devices from CRI Config.CDIDevices: []"
May 13 23:42:55.489619 kubelet[2797]: W0513 23:42:55.489538 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-5e434aba7d&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
May 13 23:42:55.489619 kubelet[2797]: E0513 23:42:55.489624 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-5e434aba7d&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError"
May 13 23:42:55.528411 kubelet[2797]: W0513 23:42:55.528355 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
May 13 23:42:55.528411 kubelet[2797]: E0513 23:42:55.528408 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError"
May 13 23:42:55.737345 kubelet[2797]: W0513 23:42:55.737184 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
May 13 23:42:55.737345 kubelet[2797]: E0513 23:42:55.737243 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError"
May 13 23:42:55.769116 kubelet[2797]: W0513 23:42:55.769084 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
May 13 23:42:55.769198 kubelet[2797]: E0513 23:42:55.769126 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError"
May 13 23:42:55.944598 kubelet[2797]: E0513 23:42:55.944541 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-5e434aba7d?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="1.6s"
May 13 23:42:56.127625 kubelet[2797]: I0513 23:42:56.127229 2797 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:56.127625 kubelet[2797]: E0513 23:42:56.127581 2797 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:56.621233 kubelet[2797]: E0513 23:42:56.621195 2797 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError"
May 13 23:42:56.696490 containerd[1805]: time="2025-05-13T23:42:56.695945540Z" level=info msg="CreateContainer within sandbox \"8bbb1c6fd7a66061b7b4feb47c7c2d754dccc77fc38761cbdc98d47b2b2af565\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9889efa6242057f67ca4553d756a735f4753ac0063f754d440e28c6416801527\""
May 13 23:42:56.696872 containerd[1805]: time="2025-05-13T23:42:56.696656061Z" level=info msg="StartContainer for \"9889efa6242057f67ca4553d756a735f4753ac0063f754d440e28c6416801527\""
May 13 23:42:56.713413 containerd[1805]: time="2025-05-13T23:42:56.713371420Z" level=info msg="connecting to shim 9889efa6242057f67ca4553d756a735f4753ac0063f754d440e28c6416801527" address="unix:///run/containerd/s/c2c3f2d91f40fdb032992c1401df0485877ad6d9520c0435be487748b6fed081" protocol=ttrpc version=3
May 13 23:42:56.720617 containerd[1805]: time="2025-05-13T23:42:56.718758552Z" level=info msg="Container d786da54235b0e056b14d6cf7448b7588cc4e4514a740c483ba3849a98d6e58d: CDI devices from CRI Config.CDIDevices: []"
May 13 23:42:56.733576 containerd[1805]: time="2025-05-13T23:42:56.732543184Z" level=info msg="CreateContainer within sandbox \"d5412131346423d54b91e1a28edd98fddf626dc895212ca4f20cc2e2110a3cd6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"34f44993ba75dc543bda71f4d161f58a69ce93aef6dd247640de0b24f34cac1d\""
May 13 23:42:56.735353 containerd[1805]: time="2025-05-13T23:42:56.735223110Z" level=info msg="StartContainer for \"34f44993ba75dc543bda71f4d161f58a69ce93aef6dd247640de0b24f34cac1d\""
May 13 23:42:56.736758 containerd[1805]: time="2025-05-13T23:42:56.736662433Z" level=info msg="connecting to shim 34f44993ba75dc543bda71f4d161f58a69ce93aef6dd247640de0b24f34cac1d" address="unix:///run/containerd/s/dc6534b88c8e993e841c2f2f03ab68cb49a1715c90fb7ecfb4635679f11e92e9" protocol=ttrpc version=3
May 13 23:42:56.740273 systemd[1]: Started cri-containerd-9889efa6242057f67ca4553d756a735f4753ac0063f754d440e28c6416801527.scope - libcontainer container 9889efa6242057f67ca4553d756a735f4753ac0063f754d440e28c6416801527.
May 13 23:42:56.757012 containerd[1805]: time="2025-05-13T23:42:56.756926600Z" level=info msg="CreateContainer within sandbox \"6d789e6d8efee096b9ecadad019a9593b0ed2f230bc1e251b8f51fc6a3ad6439\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d786da54235b0e056b14d6cf7448b7588cc4e4514a740c483ba3849a98d6e58d\""
May 13 23:42:56.759028 containerd[1805]: time="2025-05-13T23:42:56.758974525Z" level=info msg="StartContainer for \"d786da54235b0e056b14d6cf7448b7588cc4e4514a740c483ba3849a98d6e58d\""
May 13 23:42:56.760192 containerd[1805]: time="2025-05-13T23:42:56.760150567Z" level=info msg="connecting to shim d786da54235b0e056b14d6cf7448b7588cc4e4514a740c483ba3849a98d6e58d" address="unix:///run/containerd/s/9b1c4eaaf11f45cf9a27a5f6bda38b3633d58fd30ce7e19be05a207fd722b64b" protocol=ttrpc version=3
May 13 23:42:56.765904 systemd[1]: Started cri-containerd-34f44993ba75dc543bda71f4d161f58a69ce93aef6dd247640de0b24f34cac1d.scope - libcontainer container 34f44993ba75dc543bda71f4d161f58a69ce93aef6dd247640de0b24f34cac1d.
May 13 23:42:56.792823 systemd[1]: Started cri-containerd-d786da54235b0e056b14d6cf7448b7588cc4e4514a740c483ba3849a98d6e58d.scope - libcontainer container d786da54235b0e056b14d6cf7448b7588cc4e4514a740c483ba3849a98d6e58d.
May 13 23:42:56.826387 containerd[1805]: time="2025-05-13T23:42:56.826352120Z" level=info msg="StartContainer for \"9889efa6242057f67ca4553d756a735f4753ac0063f754d440e28c6416801527\" returns successfully"
May 13 23:42:56.890261 containerd[1805]: time="2025-05-13T23:42:56.889953146Z" level=info msg="StartContainer for \"d786da54235b0e056b14d6cf7448b7588cc4e4514a740c483ba3849a98d6e58d\" returns successfully"
May 13 23:42:56.891198 containerd[1805]: time="2025-05-13T23:42:56.891121229Z" level=info msg="StartContainer for \"34f44993ba75dc543bda71f4d161f58a69ce93aef6dd247640de0b24f34cac1d\" returns successfully"
May 13 23:42:57.609140 kubelet[2797]: E0513 23:42:57.608900 2797 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-5e434aba7d\" not found" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:57.614730 kubelet[2797]: E0513 23:42:57.613847 2797 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-5e434aba7d\" not found" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:57.618330 kubelet[2797]: E0513 23:42:57.618183 2797 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-5e434aba7d\" not found" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:57.731130 kubelet[2797]: I0513 23:42:57.730831 2797 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:58.127683 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
May 13 23:42:58.620311 kubelet[2797]: E0513 23:42:58.619881 2797 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-5e434aba7d\" not found" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:58.620311 kubelet[2797]: E0513 23:42:58.620201 2797 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-5e434aba7d\" not found" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:58.621699 kubelet[2797]: E0513 23:42:58.621683 2797 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-5e434aba7d\" not found" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:59.222027 kubelet[2797]: E0513 23:42:59.221982 2797 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-n-5e434aba7d\" not found" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:59.389287 kubelet[2797]: I0513 23:42:59.389241 2797 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284.0.0-n-5e434aba7d"
May 13 23:42:59.435268 kubelet[2797]: I0513 23:42:59.435223 2797 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:59.446810 kubelet[2797]: E0513 23:42:59.446767 2797 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284.0.0-n-5e434aba7d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:59.446810 kubelet[2797]: I0513 23:42:59.446801 2797 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:59.448756 kubelet[2797]: E0513 23:42:59.448722 2797 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-n-5e434aba7d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:59.448756 kubelet[2797]: I0513 23:42:59.448749 2797 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:59.450567 kubelet[2797]: E0513 23:42:59.450536 2797 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284.0.0-n-5e434aba7d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:59.530579 kubelet[2797]: I0513 23:42:59.530253 2797 apiserver.go:52] "Watching apiserver"
May 13 23:42:59.539234 kubelet[2797]: I0513 23:42:59.539176 2797 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 23:42:59.618659 kubelet[2797]: I0513 23:42:59.618493 2797 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:59.618659 kubelet[2797]: I0513 23:42:59.618508 2797 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:59.621125 kubelet[2797]: E0513 23:42:59.621017 2797 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284.0.0-n-5e434aba7d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:59.623336 kubelet[2797]: E0513 23:42:59.623152 2797 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-n-5e434aba7d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284.0.0-n-5e434aba7d"
May 13 23:42:59.930238 update_engine[1760]: I20250513 23:42:59.929663 1760 update_attempter.cc:509] Updating boot flags...
May 13 23:42:59.984712 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (3080)
May 13 23:43:00.621803 kubelet[2797]: I0513 23:43:00.621531 2797 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-5e434aba7d"
May 13 23:43:00.633091 kubelet[2797]: W0513 23:43:00.633013 2797 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 23:43:01.391534 systemd[1]: Reload requested from client PID 3129 ('systemctl') (unit session-9.scope)...
May 13 23:43:01.391888 systemd[1]: Reloading...
May 13 23:43:01.496672 zram_generator::config[3179]: No configuration found.
May 13 23:43:01.600733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:43:01.720947 systemd[1]: Reloading finished in 328 ms.
May 13 23:43:01.748230 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:43:01.767328 systemd[1]: kubelet.service: Deactivated successfully.
May 13 23:43:01.769625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:43:01.769675 systemd[1]: kubelet.service: Consumed 1.049s CPU time, 126.6M memory peak.
May 13 23:43:01.773838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:43:02.033075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:43:02.041017 (kubelet)[3240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 23:43:02.087855 kubelet[3240]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:43:02.087855 kubelet[3240]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 13 23:43:02.087855 kubelet[3240]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:43:02.088494 kubelet[3240]: I0513 23:43:02.087946 3240 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 23:43:02.094444 kubelet[3240]: I0513 23:43:02.094401 3240 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 13 23:43:02.094444 kubelet[3240]: I0513 23:43:02.094429 3240 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 23:43:02.094837 kubelet[3240]: I0513 23:43:02.094690 3240 server.go:954] "Client rotation is on, will bootstrap in background"
May 13 23:43:02.096094 kubelet[3240]: I0513 23:43:02.096059 3240 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 13 23:43:02.098410 kubelet[3240]: I0513 23:43:02.098374 3240 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:43:02.104605 kubelet[3240]: I0513 23:43:02.104561 3240 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 23:43:02.109706 kubelet[3240]: I0513 23:43:02.109678 3240 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 23:43:02.109905 kubelet[3240]: I0513 23:43:02.109878 3240 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 23:43:02.110056 kubelet[3240]: I0513 23:43:02.109905 3240 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-5e434aba7d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 23:43:02.110157 kubelet[3240]: I0513 23:43:02.110066 3240 topology_manager.go:138] "Creating topology manager with none policy"
May 13 23:43:02.110157 kubelet[3240]: I0513 23:43:02.110075 3240 container_manager_linux.go:304] "Creating device plugin manager"
May 13 23:43:02.110157 kubelet[3240]: I0513 23:43:02.110116 3240 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:43:02.110262 kubelet[3240]: I0513 23:43:02.110229 3240 kubelet.go:446] "Attempting to sync node with API server"
May 13 23:43:02.110262 kubelet[3240]: I0513 23:43:02.110240 3240 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 23:43:02.110262 kubelet[3240]: I0513 23:43:02.110260 3240 kubelet.go:352] "Adding apiserver pod source"
May 13 23:43:02.110695 kubelet[3240]: I0513 23:43:02.110273 3240 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 23:43:02.111713 kubelet[3240]: I0513 23:43:02.111693 3240 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 23:43:02.112237 kubelet[3240]: I0513 23:43:02.112222 3240 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 23:43:02.114056 kubelet[3240]: I0513 23:43:02.114040 3240 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 13 23:43:02.114164 kubelet[3240]: I0513 23:43:02.114155 3240 server.go:1287] "Started kubelet"
May 13 23:43:02.119470 kubelet[3240]: I0513 23:43:02.119454 3240 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:43:02.120651 kubelet[3240]: I0513 23:43:02.120180 3240 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:43:02.122891 kubelet[3240]: I0513 23:43:02.121092 3240 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:43:02.122891 kubelet[3240]: I0513 23:43:02.121323 3240 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:43:02.124550 kubelet[3240]: I0513 23:43:02.124533 3240 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 23:43:02.128383 kubelet[3240]: I0513 23:43:02.128337 3240 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 13 23:43:02.128562 kubelet[3240]: E0513 23:43:02.128525 3240 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-5e434aba7d\" not found"
May 13 23:43:02.129049 kubelet[3240]: I0513 23:43:02.129025 3240 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 23:43:02.129199 kubelet[3240]: I0513 23:43:02.129150 3240 reconciler.go:26] "Reconciler: start to sync state"
May 13 23:43:02.129906 kubelet[3240]: I0513 23:43:02.129886 3240 server.go:490] "Adding debug handlers to kubelet server"
May 13 23:43:02.132445 kubelet[3240]: I0513 23:43:02.130021 3240 factory.go:221] Registration of the systemd container factory successfully
May 13 23:43:02.132671 kubelet[3240]: I0513 23:43:02.132643 3240 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 23:43:02.135137 kubelet[3240]: E0513 23:43:02.134869 3240 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 23:43:02.135620 kubelet[3240]: I0513 23:43:02.135263 3240 factory.go:221] Registration of the containerd container factory successfully
May 13 23:43:02.154736 kubelet[3240]: I0513 23:43:02.154696 3240 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 23:43:02.158389 kubelet[3240]: I0513 23:43:02.158355 3240 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 23:43:02.158389 kubelet[3240]: I0513 23:43:02.158382 3240 status_manager.go:227] "Starting to sync pod status with apiserver"
May 13 23:43:02.158496 kubelet[3240]: I0513 23:43:02.158400 3240 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 13 23:43:02.158496 kubelet[3240]: I0513 23:43:02.158407 3240 kubelet.go:2388] "Starting kubelet main sync loop" May 13 23:43:02.158496 kubelet[3240]: E0513 23:43:02.158444 3240 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:43:02.202093 kubelet[3240]: I0513 23:43:02.202063 3240 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 23:43:02.202093 kubelet[3240]: I0513 23:43:02.202083 3240 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 23:43:02.202233 kubelet[3240]: I0513 23:43:02.202104 3240 state_mem.go:36] "Initialized new in-memory state store" May 13 23:43:02.202296 kubelet[3240]: I0513 23:43:02.202274 3240 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:43:02.202324 kubelet[3240]: I0513 23:43:02.202293 3240 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:43:02.202324 kubelet[3240]: I0513 23:43:02.202317 3240 policy_none.go:49] "None policy: Start" May 13 23:43:02.202324 kubelet[3240]: I0513 23:43:02.202325 3240 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 23:43:02.202393 kubelet[3240]: I0513 23:43:02.202335 3240 state_mem.go:35] "Initializing new in-memory state store" May 13 23:43:02.202446 kubelet[3240]: I0513 23:43:02.202430 3240 state_mem.go:75] "Updated machine memory state" May 13 23:43:02.206493 kubelet[3240]: I0513 23:43:02.206473 3240 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:43:02.207229 kubelet[3240]: I0513 23:43:02.206896 3240 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:43:02.207229 kubelet[3240]: I0513 23:43:02.206912 3240 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:43:02.207229 kubelet[3240]: I0513 23:43:02.207121 3240 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:43:02.209346 kubelet[3240]: E0513 23:43:02.208393 3240 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 23:43:02.259897 kubelet[3240]: I0513 23:43:02.259801 3240 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.260268 kubelet[3240]: I0513 23:43:02.260241 3240 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.260615 kubelet[3240]: I0513 23:43:02.260446 3240 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.269373 kubelet[3240]: W0513 23:43:02.269327 3240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:43:02.274953 kubelet[3240]: W0513 23:43:02.274803 3240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:43:02.275527 kubelet[3240]: W0513 23:43:02.275506 3240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:43:02.275618 kubelet[3240]: E0513 23:43:02.275555 3240 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284.0.0-n-5e434aba7d\" already exists" pod="kube-system/kube-scheduler-ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.309982 kubelet[3240]: I0513 23:43:02.309948 3240 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.323295 kubelet[3240]: I0513 23:43:02.323257 3240 kubelet_node_status.go:125] "Node was previously registered" node="ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.323519 kubelet[3240]: I0513 23:43:02.323348 3240 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.432207 kubelet[3240]: I0513 23:43:02.432158 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b50d69e8045ff867c01d52744c7b6e2e-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-5e434aba7d\" (UID: \"b50d69e8045ff867c01d52744c7b6e2e\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.432207 kubelet[3240]: I0513 23:43:02.432206 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b50d69e8045ff867c01d52744c7b6e2e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-5e434aba7d\" (UID: \"b50d69e8045ff867c01d52744c7b6e2e\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.432392 kubelet[3240]: I0513 23:43:02.432234 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3cfe37a1391ae262915799f160ea4d67-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-5e434aba7d\" (UID: \"3cfe37a1391ae262915799f160ea4d67\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.432392 kubelet[3240]: I0513 23:43:02.432251 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3cfe37a1391ae262915799f160ea4d67-k8s-certs\") pod 
\"kube-controller-manager-ci-4284.0.0-n-5e434aba7d\" (UID: \"3cfe37a1391ae262915799f160ea4d67\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.432392 kubelet[3240]: I0513 23:43:02.432286 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cfe37a1391ae262915799f160ea4d67-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-5e434aba7d\" (UID: \"3cfe37a1391ae262915799f160ea4d67\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.432392 kubelet[3240]: I0513 23:43:02.432303 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3cfe37a1391ae262915799f160ea4d67-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-5e434aba7d\" (UID: \"3cfe37a1391ae262915799f160ea4d67\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.432392 kubelet[3240]: I0513 23:43:02.432319 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b50d69e8045ff867c01d52744c7b6e2e-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-5e434aba7d\" (UID: \"b50d69e8045ff867c01d52744c7b6e2e\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.432500 kubelet[3240]: I0513 23:43:02.432336 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3cfe37a1391ae262915799f160ea4d67-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-5e434aba7d\" (UID: \"3cfe37a1391ae262915799f160ea4d67\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-5e434aba7d" May 13 23:43:02.432500 kubelet[3240]: I0513 23:43:02.432351 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68f437b5a8cb8f2fd6f515be8ba55a6f-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-5e434aba7d\" (UID: \"68f437b5a8cb8f2fd6f515be8ba55a6f\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-5e434aba7d" May 13 23:43:03.111202 kubelet[3240]: I0513 23:43:03.111139 3240 apiserver.go:52] "Watching apiserver" May 13 23:43:03.129287 kubelet[3240]: I0513 23:43:03.129220 3240 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:43:03.188163 kubelet[3240]: I0513 23:43:03.188114 3240 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-5e434aba7d" May 13 23:43:03.210851 kubelet[3240]: W0513 23:43:03.210813 3240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:43:03.210983 kubelet[3240]: E0513 23:43:03.210875 3240 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-n-5e434aba7d\" already exists" pod="kube-system/kube-apiserver-ci-4284.0.0-n-5e434aba7d" May 13 23:43:03.232021 kubelet[3240]: I0513 23:43:03.231938 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-n-5e434aba7d" podStartSLOduration=3.231909822 podStartE2EDuration="3.231909822s" podCreationTimestamp="2025-05-13 23:43:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:03.231640221 +0000 UTC m=+1.186122817" watchObservedRunningTime="2025-05-13 23:43:03.231909822 +0000 UTC m=+1.186392418" May 13 23:43:03.281203 kubelet[3240]: I0513 23:43:03.280872 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-5e434aba7d" podStartSLOduration=1.280854463 podStartE2EDuration="1.280854463s" podCreationTimestamp="2025-05-13 23:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:03.260654133 +0000 UTC m=+1.215136729" watchObservedRunningTime="2025-05-13 23:43:03.280854463 +0000 UTC m=+1.235337059" May 13 23:43:06.988749 sudo[2233]: pam_unix(sudo:session): session closed for user root May 13 23:43:07.081367 sshd[2232]: Connection closed by 10.200.16.10 port 49154 May 13 23:43:07.082077 sshd-session[2230]: pam_unix(sshd:session): session closed for user core May 13 23:43:07.085353 systemd[1]: sshd@6-10.200.20.14:22-10.200.16.10:49154.service: Deactivated successfully. May 13 23:43:07.088106 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:43:07.088311 systemd[1]: session-9.scope: Consumed 6.017s CPU time, 230.6M memory peak. May 13 23:43:07.091332 systemd-logind[1754]: Session 9 logged out. Waiting for processes to exit. May 13 23:43:07.092641 systemd-logind[1754]: Removed session 9. May 13 23:43:07.954871 kubelet[3240]: I0513 23:43:07.954779 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-n-5e434aba7d" podStartSLOduration=5.9546381539999995 podStartE2EDuration="5.954638154s" podCreationTimestamp="2025-05-13 23:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:03.281904225 +0000 UTC m=+1.236386821" watchObservedRunningTime="2025-05-13 23:43:07.954638154 +0000 UTC m=+5.909120750" May 13 23:43:08.231734 kubelet[3240]: I0513 23:43:08.229248 3240 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:43:08.231841 containerd[1805]: time="2025-05-13T23:43:08.229569314Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:43:08.233333 kubelet[3240]: I0513 23:43:08.232311 3240 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:43:09.242239 systemd[1]: Created slice kubepods-besteffort-poddf44c0ec_ecb7_4e97_90b6_7b595bbd4948.slice - libcontainer container kubepods-besteffort-poddf44c0ec_ecb7_4e97_90b6_7b595bbd4948.slice. 
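The kubelet entries above all share the klog header layout: severity letter, MMDD date, wall-clock time, PID, source file:line, "]", then a structured message. A small sketch of splitting that header, with a regexp derived from the lines shown here rather than from klog's own format code:

// Sketch: split a klog header such as
//   I0513 23:43:02.128337 3240 volume_manager.go:297] "Starting Kubelet Volume Manager"
// into severity, date, time, pid, source location, and message.
package main

import (
	"fmt"
	"regexp"
)

var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ \]]+)\] (.*)$`)

func main() {
	line := `I0513 23:43:02.128337 3240 volume_manager.go:297] "Starting Kubelet Volume Manager"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date(MMDD)=%s time=%s pid=%s source=%s msg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}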
May 13 23:43:09.274797 kubelet[3240]: I0513 23:43:09.274668 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df44c0ec-ecb7-4e97-90b6-7b595bbd4948-xtables-lock\") pod \"kube-proxy-xwm57\" (UID: \"df44c0ec-ecb7-4e97-90b6-7b595bbd4948\") " pod="kube-system/kube-proxy-xwm57" May 13 23:43:09.274797 kubelet[3240]: I0513 23:43:09.274707 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df44c0ec-ecb7-4e97-90b6-7b595bbd4948-lib-modules\") pod \"kube-proxy-xwm57\" (UID: \"df44c0ec-ecb7-4e97-90b6-7b595bbd4948\") " pod="kube-system/kube-proxy-xwm57" May 13 23:43:09.274797 kubelet[3240]: I0513 23:43:09.274729 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn68v\" (UniqueName: \"kubernetes.io/projected/df44c0ec-ecb7-4e97-90b6-7b595bbd4948-kube-api-access-vn68v\") pod \"kube-proxy-xwm57\" (UID: \"df44c0ec-ecb7-4e97-90b6-7b595bbd4948\") " pod="kube-system/kube-proxy-xwm57" May 13 23:43:09.274797 kubelet[3240]: I0513 23:43:09.274750 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/df44c0ec-ecb7-4e97-90b6-7b595bbd4948-kube-proxy\") pod \"kube-proxy-xwm57\" (UID: \"df44c0ec-ecb7-4e97-90b6-7b595bbd4948\") " pod="kube-system/kube-proxy-xwm57" May 13 23:43:09.356329 systemd[1]: Created slice kubepods-besteffort-pod008f3e3a_d631_4c71_b7a5_35430029fbc7.slice - libcontainer container kubepods-besteffort-pod008f3e3a_d631_4c71_b7a5_35430029fbc7.slice. May 13 23:43:09.375344 kubelet[3240]: I0513 23:43:09.375093 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/008f3e3a-d631-4c71-b7a5-35430029fbc7-var-lib-calico\") pod \"tigera-operator-789496d6f5-6sjmk\" (UID: \"008f3e3a-d631-4c71-b7a5-35430029fbc7\") " pod="tigera-operator/tigera-operator-789496d6f5-6sjmk" May 13 23:43:09.375612 kubelet[3240]: I0513 23:43:09.375238 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k97wc\" (UniqueName: \"kubernetes.io/projected/008f3e3a-d631-4c71-b7a5-35430029fbc7-kube-api-access-k97wc\") pod \"tigera-operator-789496d6f5-6sjmk\" (UID: \"008f3e3a-d631-4c71-b7a5-35430029fbc7\") " pod="tigera-operator/tigera-operator-789496d6f5-6sjmk" May 13 23:43:09.549134 containerd[1805]: time="2025-05-13T23:43:09.548951258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xwm57,Uid:df44c0ec-ecb7-4e97-90b6-7b595bbd4948,Namespace:kube-system,Attempt:0,}" May 13 23:43:09.619930 containerd[1805]: time="2025-05-13T23:43:09.619652011Z" level=info msg="connecting to shim c6685c5ab4c573fc81d045ef03d09e8da6d1dc6001fb938c3581610c2e3ffc4b" address="unix:///run/containerd/s/6eac315107e2d90cb61f2653e507a5f5eceef99e644ae3a3d4898309c299399a" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:09.638762 systemd[1]: Started cri-containerd-c6685c5ab4c573fc81d045ef03d09e8da6d1dc6001fb938c3581610c2e3ffc4b.scope - libcontainer container c6685c5ab4c573fc81d045ef03d09e8da6d1dc6001fb938c3581610c2e3ffc4b. 
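The containerd entries, by contrast, log in quoted key=value form (time=... level=... msg=...). A rough sketch of extracting those fields, again with a regexp derived from the entries shown here rather than from containerd's logging code:

// Sketch: pull the time/level/msg fields out of a containerd log line like the
// RunPodSandbox entry above for kube-proxy-xwm57.
package main

import (
	"fmt"
	"regexp"
)

// key=value pairs where the value is either a quoted string (with escapes) or a bare token
var field = regexp.MustCompile(`(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))`)

func main() {
	line := `time="2025-05-13T23:43:09.548951258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xwm57,Uid:df44c0ec-ecb7-4e97-90b6-7b595bbd4948,Namespace:kube-system,Attempt:0,}"`
	for _, m := range field.FindAllStringSubmatch(line, -1) {
		val := m[2]
		if val == "" && m[3] != "" {
			val = m[3] // unquoted value, e.g. level=info
		}
		fmt.Printf("%s = %s\n", m[1], val)
	}
}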
May 13 23:43:09.661692 containerd[1805]: time="2025-05-13T23:43:09.661494799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-6sjmk,Uid:008f3e3a-d631-4c71-b7a5-35430029fbc7,Namespace:tigera-operator,Attempt:0,}" May 13 23:43:09.666977 containerd[1805]: time="2025-05-13T23:43:09.666796887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xwm57,Uid:df44c0ec-ecb7-4e97-90b6-7b595bbd4948,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6685c5ab4c573fc81d045ef03d09e8da6d1dc6001fb938c3581610c2e3ffc4b\"" May 13 23:43:09.672626 containerd[1805]: time="2025-05-13T23:43:09.671193894Z" level=info msg="CreateContainer within sandbox \"c6685c5ab4c573fc81d045ef03d09e8da6d1dc6001fb938c3581610c2e3ffc4b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:43:09.726793 containerd[1805]: time="2025-05-13T23:43:09.726388343Z" level=info msg="Container 047d421bf2f3ed10a5448a0b8dac260bbd48604ccd91664039f3f51fd22663f2: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:09.775639 containerd[1805]: time="2025-05-13T23:43:09.774568620Z" level=info msg="CreateContainer within sandbox \"c6685c5ab4c573fc81d045ef03d09e8da6d1dc6001fb938c3581610c2e3ffc4b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"047d421bf2f3ed10a5448a0b8dac260bbd48604ccd91664039f3f51fd22663f2\"" May 13 23:43:09.777270 containerd[1805]: time="2025-05-13T23:43:09.776759984Z" level=info msg="StartContainer for \"047d421bf2f3ed10a5448a0b8dac260bbd48604ccd91664039f3f51fd22663f2\"" May 13 23:43:09.780064 containerd[1805]: time="2025-05-13T23:43:09.780030349Z" level=info msg="connecting to shim 047d421bf2f3ed10a5448a0b8dac260bbd48604ccd91664039f3f51fd22663f2" address="unix:///run/containerd/s/6eac315107e2d90cb61f2653e507a5f5eceef99e644ae3a3d4898309c299399a" protocol=ttrpc version=3 May 13 23:43:09.784167 containerd[1805]: time="2025-05-13T23:43:09.783806835Z" level=info msg="connecting to shim 806b4af57df0ab0b88c96031dd948d27b96ff4a2644e17e4b798d9c2a1460c26" address="unix:///run/containerd/s/2506708087b8ea02b1101e277818601027a4513631286267c09bd0434785f429" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:09.803755 systemd[1]: Started cri-containerd-047d421bf2f3ed10a5448a0b8dac260bbd48604ccd91664039f3f51fd22663f2.scope - libcontainer container 047d421bf2f3ed10a5448a0b8dac260bbd48604ccd91664039f3f51fd22663f2. May 13 23:43:09.815759 systemd[1]: Started cri-containerd-806b4af57df0ab0b88c96031dd948d27b96ff4a2644e17e4b798d9c2a1460c26.scope - libcontainer container 806b4af57df0ab0b88c96031dd948d27b96ff4a2644e17e4b798d9c2a1460c26. 
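The lines above trace the CRI sequence the kubelet drives for kube-proxy-xwm57: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer then runs over the same shim socket. A compressed sketch of those three calls against containerd's CRI endpoint; the socket path is containerd's default, the image tag is an assumption based on the kubeletVersion logged earlier, and real kubelet requests carry far more configuration than this.

// Sketch: RunPodSandbox -> CreateContainer -> StartContainer, the lifecycle
// visible in the containerd log entries above. Minimal configs, not what the
// kubelet actually sends.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-xwm57",
			Namespace: "kube-system",
			Uid:       "df44c0ec-ecb7-4e97-90b6-7b595bbd4948",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.0"}, // assumed tag
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("started container", created.ContainerId, "in sandbox", sb.PodSandboxId)
}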
May 13 23:43:09.856509 containerd[1805]: time="2025-05-13T23:43:09.856010151Z" level=info msg="StartContainer for \"047d421bf2f3ed10a5448a0b8dac260bbd48604ccd91664039f3f51fd22663f2\" returns successfully" May 13 23:43:09.870338 containerd[1805]: time="2025-05-13T23:43:09.870208694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-6sjmk,Uid:008f3e3a-d631-4c71-b7a5-35430029fbc7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"806b4af57df0ab0b88c96031dd948d27b96ff4a2644e17e4b798d9c2a1460c26\"" May 13 23:43:09.872825 containerd[1805]: time="2025-05-13T23:43:09.872782018Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 23:43:10.221428 kubelet[3240]: I0513 23:43:10.220725 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xwm57" podStartSLOduration=1.220704338 podStartE2EDuration="1.220704338s" podCreationTimestamp="2025-05-13 23:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:10.220306618 +0000 UTC m=+8.174789214" watchObservedRunningTime="2025-05-13 23:43:10.220704338 +0000 UTC m=+8.175186934" May 13 23:43:11.841394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677064078.mount: Deactivated successfully. May 13 23:43:12.447080 containerd[1805]: time="2025-05-13T23:43:12.446779067Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:12.451186 containerd[1805]: time="2025-05-13T23:43:12.451003155Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 13 23:43:12.458192 containerd[1805]: time="2025-05-13T23:43:12.458134368Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:12.475121 containerd[1805]: time="2025-05-13T23:43:12.475044400Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:12.475873 containerd[1805]: time="2025-05-13T23:43:12.475754241Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.602933463s" May 13 23:43:12.475873 containerd[1805]: time="2025-05-13T23:43:12.475787321Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 13 23:43:12.479367 containerd[1805]: time="2025-05-13T23:43:12.479332928Z" level=info msg="CreateContainer within sandbox \"806b4af57df0ab0b88c96031dd948d27b96ff4a2644e17e4b798d9c2a1460c26\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 23:43:12.522100 containerd[1805]: time="2025-05-13T23:43:12.522026367Z" level=info msg="Container 88b453e2306dad035da8b01698527302083451cad54fcff048207c24b81f188c: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:12.548853 containerd[1805]: time="2025-05-13T23:43:12.548743978Z" level=info 
msg="CreateContainer within sandbox \"806b4af57df0ab0b88c96031dd948d27b96ff4a2644e17e4b798d9c2a1460c26\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"88b453e2306dad035da8b01698527302083451cad54fcff048207c24b81f188c\"" May 13 23:43:12.549580 containerd[1805]: time="2025-05-13T23:43:12.549534859Z" level=info msg="StartContainer for \"88b453e2306dad035da8b01698527302083451cad54fcff048207c24b81f188c\"" May 13 23:43:12.550545 containerd[1805]: time="2025-05-13T23:43:12.550500141Z" level=info msg="connecting to shim 88b453e2306dad035da8b01698527302083451cad54fcff048207c24b81f188c" address="unix:///run/containerd/s/2506708087b8ea02b1101e277818601027a4513631286267c09bd0434785f429" protocol=ttrpc version=3 May 13 23:43:12.570770 systemd[1]: Started cri-containerd-88b453e2306dad035da8b01698527302083451cad54fcff048207c24b81f188c.scope - libcontainer container 88b453e2306dad035da8b01698527302083451cad54fcff048207c24b81f188c. May 13 23:43:12.606978 containerd[1805]: time="2025-05-13T23:43:12.606931366Z" level=info msg="StartContainer for \"88b453e2306dad035da8b01698527302083451cad54fcff048207c24b81f188c\" returns successfully" May 13 23:43:16.853445 kubelet[3240]: I0513 23:43:16.853240 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-6sjmk" podStartSLOduration=5.248499251 podStartE2EDuration="7.853220557s" podCreationTimestamp="2025-05-13 23:43:09 +0000 UTC" firstStartedPulling="2025-05-13 23:43:09.872083817 +0000 UTC m=+7.826566373" lastFinishedPulling="2025-05-13 23:43:12.476805083 +0000 UTC m=+10.431287679" observedRunningTime="2025-05-13 23:43:13.229142932 +0000 UTC m=+11.183625528" watchObservedRunningTime="2025-05-13 23:43:16.853220557 +0000 UTC m=+14.807703153" May 13 23:43:16.862172 systemd[1]: Created slice kubepods-besteffort-podcc4041f0_081d_49fe_b42b_e306764e98ed.slice - libcontainer container kubepods-besteffort-podcc4041f0_081d_49fe_b42b_e306764e98ed.slice. May 13 23:43:16.916642 kubelet[3240]: I0513 23:43:16.916583 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc4041f0-081d-49fe-b42b-e306764e98ed-tigera-ca-bundle\") pod \"calico-typha-67d7bd4d74-pglnf\" (UID: \"cc4041f0-081d-49fe-b42b-e306764e98ed\") " pod="calico-system/calico-typha-67d7bd4d74-pglnf" May 13 23:43:16.917746 kubelet[3240]: I0513 23:43:16.917659 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cc4041f0-081d-49fe-b42b-e306764e98ed-typha-certs\") pod \"calico-typha-67d7bd4d74-pglnf\" (UID: \"cc4041f0-081d-49fe-b42b-e306764e98ed\") " pod="calico-system/calico-typha-67d7bd4d74-pglnf" May 13 23:43:16.917746 kubelet[3240]: I0513 23:43:16.917696 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgr5n\" (UniqueName: \"kubernetes.io/projected/cc4041f0-081d-49fe-b42b-e306764e98ed-kube-api-access-kgr5n\") pod \"calico-typha-67d7bd4d74-pglnf\" (UID: \"cc4041f0-081d-49fe-b42b-e306764e98ed\") " pod="calico-system/calico-typha-67d7bd4d74-pglnf" May 13 23:43:16.979690 systemd[1]: Created slice kubepods-besteffort-pod19b2eae0_3ae2_409c_b6b4_0b92be88d01f.slice - libcontainer container kubepods-besteffort-pod19b2eae0_3ae2_409c_b6b4_0b92be88d01f.slice. 
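The pod_startup_latency_tracker entries carry enough data to recompute what they report. For the tigera-operator pod above, podStartE2EDuration (7.853220557s) is watchObservedRunningTime minus podCreationTimestamp, and subtracting the image-pull window (firstStartedPulling to lastFinishedPulling, 2.604721266s) yields exactly the reported podStartSLOduration of 5.248499251s, which suggests the SLO figure excludes pull time; for the earlier kube-scheduler pod the two figures coincide because its pull timestamps are zero-valued. Recomputing from the quoted timestamps:

// Sketch: recompute the tigera-operator startup figures from the timestamps
// quoted in the pod_startup_latency_tracker line above.
package main

import (
	"fmt"
	"time"
)

// layout matches the quoted timestamps; time.Parse also accepts the fractional seconds
const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-13 23:43:09 +0000 UTC")
	pullStart := mustParse("2025-05-13 23:43:09.872083817 +0000 UTC")
	pullEnd := mustParse("2025-05-13 23:43:12.476805083 +0000 UTC")
	watched := mustParse("2025-05-13 23:43:16.853220557 +0000 UTC")

	e2e := watched.Sub(created)    // 7.853220557s, the reported podStartE2EDuration
	pull := pullEnd.Sub(pullStart) // 2.604721266s spent pulling the operator image
	fmt.Println("E2E:", e2e, "pull:", pull, "E2E-pull:", e2e-pull) // E2E-pull = 5.248499251s = podStartSLOduration
}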
May 13 23:43:17.018899 kubelet[3240]: I0513 23:43:17.018859 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-var-run-calico\") pod \"calico-node-c6ls4\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " pod="calico-system/calico-node-c6ls4" May 13 23:43:17.019080 kubelet[3240]: I0513 23:43:17.018928 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-flexvol-driver-host\") pod \"calico-node-c6ls4\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " pod="calico-system/calico-node-c6ls4" May 13 23:43:17.019080 kubelet[3240]: I0513 23:43:17.018951 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-var-lib-calico\") pod \"calico-node-c6ls4\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " pod="calico-system/calico-node-c6ls4" May 13 23:43:17.019080 kubelet[3240]: I0513 23:43:17.018985 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-cni-log-dir\") pod \"calico-node-c6ls4\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " pod="calico-system/calico-node-c6ls4" May 13 23:43:17.019080 kubelet[3240]: I0513 23:43:17.019002 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-cni-bin-dir\") pod \"calico-node-c6ls4\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " pod="calico-system/calico-node-c6ls4" May 13 23:43:17.019080 kubelet[3240]: I0513 23:43:17.019027 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-cni-net-dir\") pod \"calico-node-c6ls4\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " pod="calico-system/calico-node-c6ls4" May 13 23:43:17.019193 kubelet[3240]: I0513 23:43:17.019050 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-tigera-ca-bundle\") pod \"calico-node-c6ls4\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " pod="calico-system/calico-node-c6ls4" May 13 23:43:17.019193 kubelet[3240]: I0513 23:43:17.019065 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vk4x\" (UniqueName: \"kubernetes.io/projected/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-kube-api-access-6vk4x\") pod \"calico-node-c6ls4\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " pod="calico-system/calico-node-c6ls4" May 13 23:43:17.019193 kubelet[3240]: I0513 23:43:17.019082 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-xtables-lock\") pod \"calico-node-c6ls4\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " pod="calico-system/calico-node-c6ls4" May 13 23:43:17.019193 kubelet[3240]: I0513 23:43:17.019099 3240 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-node-certs\") pod \"calico-node-c6ls4\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " pod="calico-system/calico-node-c6ls4" May 13 23:43:17.019193 kubelet[3240]: I0513 23:43:17.019122 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-lib-modules\") pod \"calico-node-c6ls4\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " pod="calico-system/calico-node-c6ls4" May 13 23:43:17.019296 kubelet[3240]: I0513 23:43:17.019137 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-policysync\") pod \"calico-node-c6ls4\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " pod="calico-system/calico-node-c6ls4" May 13 23:43:17.104788 kubelet[3240]: E0513 23:43:17.103374 3240 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7b2t" podUID="b076526f-88e8-4f5d-b600-5d379026eaec" May 13 23:43:17.120539 kubelet[3240]: I0513 23:43:17.119453 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b076526f-88e8-4f5d-b600-5d379026eaec-kubelet-dir\") pod \"csi-node-driver-q7b2t\" (UID: \"b076526f-88e8-4f5d-b600-5d379026eaec\") " pod="calico-system/csi-node-driver-q7b2t" May 13 23:43:17.120539 kubelet[3240]: I0513 23:43:17.119494 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b076526f-88e8-4f5d-b600-5d379026eaec-registration-dir\") pod \"csi-node-driver-q7b2t\" (UID: \"b076526f-88e8-4f5d-b600-5d379026eaec\") " pod="calico-system/csi-node-driver-q7b2t" May 13 23:43:17.120539 kubelet[3240]: I0513 23:43:17.119512 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74z9m\" (UniqueName: \"kubernetes.io/projected/b076526f-88e8-4f5d-b600-5d379026eaec-kube-api-access-74z9m\") pod \"csi-node-driver-q7b2t\" (UID: \"b076526f-88e8-4f5d-b600-5d379026eaec\") " pod="calico-system/csi-node-driver-q7b2t" May 13 23:43:17.120539 kubelet[3240]: I0513 23:43:17.119644 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b076526f-88e8-4f5d-b600-5d379026eaec-socket-dir\") pod \"csi-node-driver-q7b2t\" (UID: \"b076526f-88e8-4f5d-b600-5d379026eaec\") " pod="calico-system/csi-node-driver-q7b2t" May 13 23:43:17.120539 kubelet[3240]: I0513 23:43:17.119695 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b076526f-88e8-4f5d-b600-5d379026eaec-varrun\") pod \"csi-node-driver-q7b2t\" (UID: \"b076526f-88e8-4f5d-b600-5d379026eaec\") " pod="calico-system/csi-node-driver-q7b2t" May 13 23:43:17.123504 kubelet[3240]: E0513 23:43:17.123480 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: 
"", error: unexpected end of JSON input May 13 23:43:17.123647 kubelet[3240]: W0513 23:43:17.123632 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:17.123748 kubelet[3240]: E0513 23:43:17.123725 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:17.123958 kubelet[3240]: E0513 23:43:17.123946 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:17.124029 kubelet[3240]: W0513 23:43:17.124017 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:17.124086 kubelet[3240]: E0513 23:43:17.124073 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:17.124269 kubelet[3240]: E0513 23:43:17.124259 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:17.124334 kubelet[3240]: W0513 23:43:17.124324 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:17.124390 kubelet[3240]: E0513 23:43:17.124381 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:17.124673 kubelet[3240]: E0513 23:43:17.124660 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:17.124791 kubelet[3240]: W0513 23:43:17.124747 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:17.124791 kubelet[3240]: E0513 23:43:17.124764 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:17.128123 kubelet[3240]: E0513 23:43:17.128097 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:17.128251 kubelet[3240]: W0513 23:43:17.128206 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:17.128251 kubelet[3240]: E0513 23:43:17.128227 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:17.169886 containerd[1805]: time="2025-05-13T23:43:17.169841830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67d7bd4d74-pglnf,Uid:cc4041f0-081d-49fe-b42b-e306764e98ed,Namespace:calico-system,Attempt:0,}" May 13 23:43:17.182950 kubelet[3240]: E0513 23:43:17.182450 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:17.182950 kubelet[3240]: W0513 23:43:17.182483 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:17.182950 kubelet[3240]: E0513 23:43:17.182504 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:17.221013 kubelet[3240]: E0513 23:43:17.220961 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:17.221280 kubelet[3240]: W0513 23:43:17.221146 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:17.221280 kubelet[3240]: E0513 23:43:17.221170 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:17.221578 kubelet[3240]: E0513 23:43:17.221465 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:17.221578 kubelet[3240]: W0513 23:43:17.221476 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:17.221578 kubelet[3240]: E0513 23:43:17.221487 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:17.222556 kubelet[3240]: E0513 23:43:17.222514 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:17.222556 kubelet[3240]: W0513 23:43:17.222531 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:17.222864 kubelet[3240]: E0513 23:43:17.222707 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:17.235381 kubelet[3240]: E0513 23:43:17.235364 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:17.235611 kubelet[3240]: W0513 23:43:17.235445 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:17.235611 kubelet[3240]: E0513 23:43:17.235472 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:17.236311 kubelet[3240]: E0513 23:43:17.235752 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:17.236311 kubelet[3240]: W0513 23:43:17.235764 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:17.236311 kubelet[3240]: E0513 23:43:17.235775 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:17.255084 kubelet[3240]: E0513 23:43:17.255053 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:17.255084 kubelet[3240]: W0513 23:43:17.255077 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:17.255232 kubelet[3240]: E0513 23:43:17.255106 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:17.258584 containerd[1805]: time="2025-05-13T23:43:17.258416636Z" level=info msg="connecting to shim 775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956" address="unix:///run/containerd/s/e28bd76327d09ca29328668e811ed685df4cc1c7937d68dfbb86e5563ff17678" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:17.283903 containerd[1805]: time="2025-05-13T23:43:17.283859564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c6ls4,Uid:19b2eae0-3ae2-409c-b6b4-0b92be88d01f,Namespace:calico-system,Attempt:0,}" May 13 23:43:17.286764 systemd[1]: Started cri-containerd-775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956.scope - libcontainer container 775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956. 
May 13 23:43:17.357239 containerd[1805]: time="2025-05-13T23:43:17.357120701Z" level=info msg="connecting to shim 4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a" address="unix:///run/containerd/s/051c342b4aca59dc1c41be23c79acb32f17565b7b5effda23ef982aff1edc278" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:17.359481 containerd[1805]: time="2025-05-13T23:43:17.359446425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67d7bd4d74-pglnf,Uid:cc4041f0-081d-49fe-b42b-e306764e98ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\"" May 13 23:43:17.363289 containerd[1805]: time="2025-05-13T23:43:17.362842272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 23:43:17.384779 systemd[1]: Started cri-containerd-4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a.scope - libcontainer container 4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a. May 13 23:43:17.431142 containerd[1805]: time="2025-05-13T23:43:17.431073759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c6ls4,Uid:19b2eae0-3ae2-409c-b6b4-0b92be88d01f,Namespace:calico-system,Attempt:0,} returns sandbox id \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\"" May 13 23:43:19.158968 kubelet[3240]: E0513 23:43:19.158921 3240 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7b2t" podUID="b076526f-88e8-4f5d-b600-5d379026eaec" May 13 23:43:19.194223 containerd[1805]: time="2025-05-13T23:43:19.193846210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:19.197118 containerd[1805]: time="2025-05-13T23:43:19.196945898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 13 23:43:19.205462 containerd[1805]: time="2025-05-13T23:43:19.205396000Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:19.214715 containerd[1805]: time="2025-05-13T23:43:19.214663744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:19.215615 containerd[1805]: time="2025-05-13T23:43:19.215543626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.852287834s" May 13 23:43:19.215615 containerd[1805]: time="2025-05-13T23:43:19.215581586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 13 23:43:19.217627 containerd[1805]: time="2025-05-13T23:43:19.217439351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 23:43:19.223253 
containerd[1805]: time="2025-05-13T23:43:19.222263084Z" level=info msg="CreateContainer within sandbox \"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 23:43:19.255790 containerd[1805]: time="2025-05-13T23:43:19.255738930Z" level=info msg="Container 9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:19.285625 containerd[1805]: time="2025-05-13T23:43:19.285553528Z" level=info msg="CreateContainer within sandbox \"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\"" May 13 23:43:19.286444 containerd[1805]: time="2025-05-13T23:43:19.286419330Z" level=info msg="StartContainer for \"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\"" May 13 23:43:19.287810 containerd[1805]: time="2025-05-13T23:43:19.287785293Z" level=info msg="connecting to shim 9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311" address="unix:///run/containerd/s/e28bd76327d09ca29328668e811ed685df4cc1c7937d68dfbb86e5563ff17678" protocol=ttrpc version=3 May 13 23:43:19.309743 systemd[1]: Started cri-containerd-9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311.scope - libcontainer container 9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311. May 13 23:43:19.350930 containerd[1805]: time="2025-05-13T23:43:19.350879817Z" level=info msg="StartContainer for \"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\" returns successfully" May 13 23:43:20.333163 kubelet[3240]: E0513 23:43:20.332998 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.333163 kubelet[3240]: W0513 23:43:20.333024 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.333163 kubelet[3240]: E0513 23:43:20.333047 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.333937 kubelet[3240]: E0513 23:43:20.333736 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.333937 kubelet[3240]: W0513 23:43:20.333751 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.333937 kubelet[3240]: E0513 23:43:20.333763 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:20.334196 kubelet[3240]: E0513 23:43:20.333986 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.334196 kubelet[3240]: W0513 23:43:20.333995 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.334196 kubelet[3240]: E0513 23:43:20.334014 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.334770 kubelet[3240]: E0513 23:43:20.334635 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.334770 kubelet[3240]: W0513 23:43:20.334649 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.334770 kubelet[3240]: E0513 23:43:20.334660 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.335105 kubelet[3240]: E0513 23:43:20.334888 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.335105 kubelet[3240]: W0513 23:43:20.334898 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.335105 kubelet[3240]: E0513 23:43:20.334908 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.335574 kubelet[3240]: E0513 23:43:20.335222 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.335574 kubelet[3240]: W0513 23:43:20.335235 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.335574 kubelet[3240]: E0513 23:43:20.335245 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.335856 kubelet[3240]: E0513 23:43:20.335745 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.335856 kubelet[3240]: W0513 23:43:20.335757 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.335856 kubelet[3240]: E0513 23:43:20.335769 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:20.336007 kubelet[3240]: E0513 23:43:20.335995 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.336329 kubelet[3240]: W0513 23:43:20.336225 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.336329 kubelet[3240]: E0513 23:43:20.336242 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.336534 kubelet[3240]: E0513 23:43:20.336522 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.336631 kubelet[3240]: W0513 23:43:20.336618 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.336702 kubelet[3240]: E0513 23:43:20.336690 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.337196 kubelet[3240]: E0513 23:43:20.337098 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.337196 kubelet[3240]: W0513 23:43:20.337111 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.337196 kubelet[3240]: E0513 23:43:20.337121 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.337361 kubelet[3240]: E0513 23:43:20.337350 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.337414 kubelet[3240]: W0513 23:43:20.337404 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.337609 kubelet[3240]: E0513 23:43:20.337467 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.337964 kubelet[3240]: E0513 23:43:20.337848 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.337964 kubelet[3240]: W0513 23:43:20.337860 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.337964 kubelet[3240]: E0513 23:43:20.337871 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:20.340081 kubelet[3240]: E0513 23:43:20.338683 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.340081 kubelet[3240]: W0513 23:43:20.338696 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.340081 kubelet[3240]: E0513 23:43:20.338707 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.340504 kubelet[3240]: E0513 23:43:20.340376 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.340504 kubelet[3240]: W0513 23:43:20.340391 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.340504 kubelet[3240]: E0513 23:43:20.340413 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.340779 kubelet[3240]: E0513 23:43:20.340683 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.340779 kubelet[3240]: W0513 23:43:20.340695 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.340779 kubelet[3240]: E0513 23:43:20.340705 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.347750 kubelet[3240]: E0513 23:43:20.347643 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.347750 kubelet[3240]: W0513 23:43:20.347715 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.347750 kubelet[3240]: E0513 23:43:20.347745 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.348396 kubelet[3240]: E0513 23:43:20.348332 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.348633 kubelet[3240]: W0513 23:43:20.348484 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.348633 kubelet[3240]: E0513 23:43:20.348521 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:20.349052 kubelet[3240]: E0513 23:43:20.349029 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.349052 kubelet[3240]: W0513 23:43:20.349047 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.349052 kubelet[3240]: E0513 23:43:20.349065 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.349778 kubelet[3240]: E0513 23:43:20.349648 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.349778 kubelet[3240]: W0513 23:43:20.349665 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.349778 kubelet[3240]: E0513 23:43:20.349687 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.349958 kubelet[3240]: E0513 23:43:20.349947 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.350014 kubelet[3240]: W0513 23:43:20.350003 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.350086 kubelet[3240]: E0513 23:43:20.350075 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.350513 kubelet[3240]: E0513 23:43:20.350486 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.350632 kubelet[3240]: W0513 23:43:20.350612 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.350632 kubelet[3240]: E0513 23:43:20.350641 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.351239 kubelet[3240]: E0513 23:43:20.351146 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.351239 kubelet[3240]: W0513 23:43:20.351162 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.351239 kubelet[3240]: E0513 23:43:20.351193 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:20.351665 kubelet[3240]: E0513 23:43:20.351491 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.351665 kubelet[3240]: W0513 23:43:20.351507 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.351665 kubelet[3240]: E0513 23:43:20.351620 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.351909 kubelet[3240]: E0513 23:43:20.351779 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.351909 kubelet[3240]: W0513 23:43:20.351795 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.351909 kubelet[3240]: E0513 23:43:20.351827 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.352013 kubelet[3240]: E0513 23:43:20.351995 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.352013 kubelet[3240]: W0513 23:43:20.352005 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.352051 kubelet[3240]: E0513 23:43:20.352020 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.352540 kubelet[3240]: E0513 23:43:20.352374 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.352540 kubelet[3240]: W0513 23:43:20.352388 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.352540 kubelet[3240]: E0513 23:43:20.352404 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.353085 kubelet[3240]: E0513 23:43:20.352953 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.353085 kubelet[3240]: W0513 23:43:20.352966 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.353085 kubelet[3240]: E0513 23:43:20.352989 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:20.353367 kubelet[3240]: E0513 23:43:20.353198 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.353367 kubelet[3240]: W0513 23:43:20.353214 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.353367 kubelet[3240]: E0513 23:43:20.353227 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.353468 kubelet[3240]: E0513 23:43:20.353458 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.353492 kubelet[3240]: W0513 23:43:20.353469 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.353492 kubelet[3240]: E0513 23:43:20.353479 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.353970 kubelet[3240]: E0513 23:43:20.353745 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.353970 kubelet[3240]: W0513 23:43:20.353760 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.353970 kubelet[3240]: E0513 23:43:20.353778 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.354486 kubelet[3240]: E0513 23:43:20.354364 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.354486 kubelet[3240]: W0513 23:43:20.354378 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.354486 kubelet[3240]: E0513 23:43:20.354399 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.354872 kubelet[3240]: E0513 23:43:20.354860 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.355182 kubelet[3240]: W0513 23:43:20.355057 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.355182 kubelet[3240]: E0513 23:43:20.355081 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:20.355366 kubelet[3240]: E0513 23:43:20.355304 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:20.355366 kubelet[3240]: W0513 23:43:20.355314 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:20.355366 kubelet[3240]: E0513 23:43:20.355324 3240 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:20.459237 containerd[1805]: time="2025-05-13T23:43:20.458272685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:20.461105 containerd[1805]: time="2025-05-13T23:43:20.461061412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 13 23:43:20.467201 containerd[1805]: time="2025-05-13T23:43:20.467173588Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:20.472840 containerd[1805]: time="2025-05-13T23:43:20.472813163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:20.473353 containerd[1805]: time="2025-05-13T23:43:20.473248124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.255775773s" May 13 23:43:20.473515 containerd[1805]: time="2025-05-13T23:43:20.473419964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 13 23:43:20.476858 containerd[1805]: time="2025-05-13T23:43:20.476826813Z" level=info msg="CreateContainer within sandbox \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 23:43:20.517459 containerd[1805]: time="2025-05-13T23:43:20.517404878Z" level=info msg="Container 0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:20.544892 containerd[1805]: time="2025-05-13T23:43:20.544786269Z" level=info msg="CreateContainer within sandbox \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\"" May 13 23:43:20.546630 containerd[1805]: time="2025-05-13T23:43:20.546088152Z" level=info msg="StartContainer for \"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\"" May 13 23:43:20.548500 containerd[1805]: time="2025-05-13T23:43:20.548468759Z" level=info msg="connecting to 
shim 0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd" address="unix:///run/containerd/s/051c342b4aca59dc1c41be23c79acb32f17565b7b5effda23ef982aff1edc278" protocol=ttrpc version=3 May 13 23:43:20.568790 systemd[1]: Started cri-containerd-0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd.scope - libcontainer container 0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd. May 13 23:43:20.611659 containerd[1805]: time="2025-05-13T23:43:20.611539442Z" level=info msg="StartContainer for \"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\" returns successfully" May 13 23:43:20.615911 systemd[1]: cri-containerd-0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd.scope: Deactivated successfully. May 13 23:43:20.619154 containerd[1805]: time="2025-05-13T23:43:20.619009141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\" id:\"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\" pid:3841 exited_at:{seconds:1747179800 nanos:617989419}" May 13 23:43:20.619329 containerd[1805]: time="2025-05-13T23:43:20.619206302Z" level=info msg="received exit event container_id:\"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\" id:\"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\" pid:3841 exited_at:{seconds:1747179800 nanos:617989419}" May 13 23:43:20.638323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd-rootfs.mount: Deactivated successfully. May 13 23:43:21.159540 kubelet[3240]: E0513 23:43:21.159493 3240 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7b2t" podUID="b076526f-88e8-4f5d-b600-5d379026eaec" May 13 23:43:21.245115 kubelet[3240]: I0513 23:43:21.245080 3240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:43:21.263927 kubelet[3240]: I0513 23:43:21.263261 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67d7bd4d74-pglnf" podStartSLOduration=3.408570171 podStartE2EDuration="5.26324293s" podCreationTimestamp="2025-05-13 23:43:16 +0000 UTC" firstStartedPulling="2025-05-13 23:43:17.36188439 +0000 UTC m=+15.316366986" lastFinishedPulling="2025-05-13 23:43:19.216557149 +0000 UTC m=+17.171039745" observedRunningTime="2025-05-13 23:43:20.257295644 +0000 UTC m=+18.211778240" watchObservedRunningTime="2025-05-13 23:43:21.26324293 +0000 UTC m=+19.217725526" May 13 23:43:22.253877 containerd[1805]: time="2025-05-13T23:43:22.253740495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 23:43:23.158958 kubelet[3240]: E0513 23:43:23.158898 3240 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7b2t" podUID="b076526f-88e8-4f5d-b600-5d379026eaec" May 13 23:43:25.159222 kubelet[3240]: E0513 23:43:25.159168 3240 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7b2t" podUID="b076526f-88e8-4f5d-b600-5d379026eaec" May 13 23:43:25.290963 containerd[1805]: time="2025-05-13T23:43:25.290914087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:25.294752 containerd[1805]: time="2025-05-13T23:43:25.294579294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 13 23:43:25.300573 containerd[1805]: time="2025-05-13T23:43:25.299743423Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:25.304288 containerd[1805]: time="2025-05-13T23:43:25.304259471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:25.304954 containerd[1805]: time="2025-05-13T23:43:25.304925112Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.050662375s" May 13 23:43:25.305064 containerd[1805]: time="2025-05-13T23:43:25.305047792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 13 23:43:25.307294 containerd[1805]: time="2025-05-13T23:43:25.307265236Z" level=info msg="CreateContainer within sandbox \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:43:25.360112 containerd[1805]: time="2025-05-13T23:43:25.360060607Z" level=info msg="Container d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:25.363258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2342340773.mount: Deactivated successfully. May 13 23:43:25.388087 containerd[1805]: time="2025-05-13T23:43:25.388043176Z" level=info msg="CreateContainer within sandbox \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\"" May 13 23:43:25.389785 containerd[1805]: time="2025-05-13T23:43:25.389747139Z" level=info msg="StartContainer for \"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\"" May 13 23:43:25.391256 containerd[1805]: time="2025-05-13T23:43:25.391225061Z" level=info msg="connecting to shim d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50" address="unix:///run/containerd/s/051c342b4aca59dc1c41be23c79acb32f17565b7b5effda23ef982aff1edc278" protocol=ttrpc version=3 May 13 23:43:25.413794 systemd[1]: Started cri-containerd-d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50.scope - libcontainer container d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50. 
May 13 23:43:25.450822 containerd[1805]: time="2025-05-13T23:43:25.450697005Z" level=info msg="StartContainer for \"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\" returns successfully" May 13 23:43:25.621850 kubelet[3240]: I0513 23:43:25.621760 3240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:43:26.478084 containerd[1805]: time="2025-05-13T23:43:26.478037728Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:43:26.479644 systemd[1]: cri-containerd-d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50.scope: Deactivated successfully. May 13 23:43:26.480192 systemd[1]: cri-containerd-d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50.scope: Consumed 365ms CPU time, 168.9M memory peak, 150.3M written to disk. May 13 23:43:26.482005 containerd[1805]: time="2025-05-13T23:43:26.481954455Z" level=info msg="received exit event container_id:\"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\" id:\"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\" pid:3905 exited_at:{seconds:1747179806 nanos:481497974}" May 13 23:43:26.482209 containerd[1805]: time="2025-05-13T23:43:26.482181936Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\" id:\"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\" pid:3905 exited_at:{seconds:1747179806 nanos:481497974}" May 13 23:43:26.502357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50-rootfs.mount: Deactivated successfully. May 13 23:43:26.509552 kubelet[3240]: I0513 23:43:26.509377 3240 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 23:43:26.558384 systemd[1]: Created slice kubepods-besteffort-pod5d642111_6cf3_4ceb_ae1d_8edd9a855efc.slice - libcontainer container kubepods-besteffort-pod5d642111_6cf3_4ceb_ae1d_8edd9a855efc.slice. May 13 23:43:26.576715 systemd[1]: Created slice kubepods-burstable-pod6b164de3_9bee_4959_b322_d0fa7d8307a9.slice - libcontainer container kubepods-burstable-pod6b164de3_9bee_4959_b322_d0fa7d8307a9.slice. 
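The "failed to reload cni configuration" entry is containerd's CRI plugin reacting to a file-watch event on /etc/cni/net.d: install-cni has written calico-kubeconfig, but no *.conflist network config exists yet, so CNI stays uninitialized and csi-node-driver-q7b2t keeps failing to sync. What eventually unblocks it is a network config list landing in that directory. A skeletal, illustrative sketch of writing one; the field values are placeholders, not the exact config Calico generates:

```go
package main

import (
	"encoding/json"
	"os"
)

func main() {
	// Illustrative only: a minimal CNI network config list of the shape
	// install-cni drops into /etc/cni/net.d. Calico's real 10-calico.conflist
	// carries many more fields (IPAM, MTU, policy, kubernetes section, ...).
	conf := map[string]any{
		"name":       "k8s-pod-network",
		"cniVersion": "0.3.1",
		"plugins": []map[string]any{
			{"type": "calico", "log_level": "info"},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/10-calico.conflist", data, 0o644); err != nil {
		panic(err)
	}
}
```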
May 13 23:43:26.893705 kubelet[3240]: W0513 23:43:26.567928 3240 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4284.0.0-n-5e434aba7d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284.0.0-n-5e434aba7d' and this object May 13 23:43:26.893705 kubelet[3240]: E0513 23:43:26.567964 3240 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4284.0.0-n-5e434aba7d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284.0.0-n-5e434aba7d' and this object" logger="UnhandledError" May 13 23:43:26.893705 kubelet[3240]: I0513 23:43:26.584839 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc-tigera-ca-bundle\") pod \"calico-kube-controllers-7b7dbd7c4d-d6t84\" (UID: \"9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc\") " pod="calico-system/calico-kube-controllers-7b7dbd7c4d-d6t84" May 13 23:43:26.893705 kubelet[3240]: I0513 23:43:26.584871 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60effb1d-d7fc-4f4c-a43f-dd169b992307-config-volume\") pod \"coredns-668d6bf9bc-nz4c5\" (UID: \"60effb1d-d7fc-4f4c-a43f-dd169b992307\") " pod="kube-system/coredns-668d6bf9bc-nz4c5" May 13 23:43:26.893705 kubelet[3240]: I0513 23:43:26.584896 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5d642111-6cf3-4ceb-ae1d-8edd9a855efc-calico-apiserver-certs\") pod \"calico-apiserver-9fc98d758-tglxs\" (UID: \"5d642111-6cf3-4ceb-ae1d-8edd9a855efc\") " pod="calico-apiserver/calico-apiserver-9fc98d758-tglxs" May 13 23:43:26.585033 systemd[1]: Created slice kubepods-besteffort-pod9f9d7e09_0870_4cc7_bce7_a13d0ea53ccc.slice - libcontainer container kubepods-besteffort-pod9f9d7e09_0870_4cc7_bce7_a13d0ea53ccc.slice. 
May 13 23:43:26.893925 kubelet[3240]: I0513 23:43:26.584912 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nccpr\" (UniqueName: \"kubernetes.io/projected/9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc-kube-api-access-nccpr\") pod \"calico-kube-controllers-7b7dbd7c4d-d6t84\" (UID: \"9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc\") " pod="calico-system/calico-kube-controllers-7b7dbd7c4d-d6t84" May 13 23:43:26.893925 kubelet[3240]: I0513 23:43:26.584929 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a1d416f3-ba05-4865-b5c3-91cc08110d19-calico-apiserver-certs\") pod \"calico-apiserver-6d667d8584-nzwmm\" (UID: \"a1d416f3-ba05-4865-b5c3-91cc08110d19\") " pod="calico-apiserver/calico-apiserver-6d667d8584-nzwmm" May 13 23:43:26.893925 kubelet[3240]: I0513 23:43:26.584944 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssw8l\" (UniqueName: \"kubernetes.io/projected/a1d416f3-ba05-4865-b5c3-91cc08110d19-kube-api-access-ssw8l\") pod \"calico-apiserver-6d667d8584-nzwmm\" (UID: \"a1d416f3-ba05-4865-b5c3-91cc08110d19\") " pod="calico-apiserver/calico-apiserver-6d667d8584-nzwmm" May 13 23:43:26.893925 kubelet[3240]: I0513 23:43:26.584960 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/879c42d3-a34a-4e3d-babc-43636a4eac7f-calico-apiserver-certs\") pod \"calico-apiserver-6d667d8584-wmdz5\" (UID: \"879c42d3-a34a-4e3d-babc-43636a4eac7f\") " pod="calico-apiserver/calico-apiserver-6d667d8584-wmdz5" May 13 23:43:26.893925 kubelet[3240]: I0513 23:43:26.584977 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sxdc\" (UniqueName: \"kubernetes.io/projected/879c42d3-a34a-4e3d-babc-43636a4eac7f-kube-api-access-5sxdc\") pod \"calico-apiserver-6d667d8584-wmdz5\" (UID: \"879c42d3-a34a-4e3d-babc-43636a4eac7f\") " pod="calico-apiserver/calico-apiserver-6d667d8584-wmdz5" May 13 23:43:26.596155 systemd[1]: Created slice kubepods-burstable-pod60effb1d_d7fc_4f4c_a43f_dd169b992307.slice - libcontainer container kubepods-burstable-pod60effb1d_d7fc_4f4c_a43f_dd169b992307.slice. 
May 13 23:43:26.894077 kubelet[3240]: I0513 23:43:26.584996 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqlfk\" (UniqueName: \"kubernetes.io/projected/5d642111-6cf3-4ceb-ae1d-8edd9a855efc-kube-api-access-nqlfk\") pod \"calico-apiserver-9fc98d758-tglxs\" (UID: \"5d642111-6cf3-4ceb-ae1d-8edd9a855efc\") " pod="calico-apiserver/calico-apiserver-9fc98d758-tglxs" May 13 23:43:26.894077 kubelet[3240]: I0513 23:43:26.585029 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b164de3-9bee-4959-b322-d0fa7d8307a9-config-volume\") pod \"coredns-668d6bf9bc-7gc9s\" (UID: \"6b164de3-9bee-4959-b322-d0fa7d8307a9\") " pod="kube-system/coredns-668d6bf9bc-7gc9s" May 13 23:43:26.894077 kubelet[3240]: I0513 23:43:26.585048 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9r78\" (UniqueName: \"kubernetes.io/projected/6b164de3-9bee-4959-b322-d0fa7d8307a9-kube-api-access-q9r78\") pod \"coredns-668d6bf9bc-7gc9s\" (UID: \"6b164de3-9bee-4959-b322-d0fa7d8307a9\") " pod="kube-system/coredns-668d6bf9bc-7gc9s" May 13 23:43:26.894077 kubelet[3240]: I0513 23:43:26.585066 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmfj8\" (UniqueName: \"kubernetes.io/projected/60effb1d-d7fc-4f4c-a43f-dd169b992307-kube-api-access-rmfj8\") pod \"coredns-668d6bf9bc-nz4c5\" (UID: \"60effb1d-d7fc-4f4c-a43f-dd169b992307\") " pod="kube-system/coredns-668d6bf9bc-nz4c5" May 13 23:43:26.605904 systemd[1]: Created slice kubepods-besteffort-pod879c42d3_a34a_4e3d_babc_43636a4eac7f.slice - libcontainer container kubepods-besteffort-pod879c42d3_a34a_4e3d_babc_43636a4eac7f.slice. May 13 23:43:26.615794 systemd[1]: Created slice kubepods-besteffort-poda1d416f3_ba05_4865_b5c3_91cc08110d19.slice - libcontainer container kubepods-besteffort-poda1d416f3_ba05_4865_b5c3_91cc08110d19.slice. May 13 23:43:27.164151 systemd[1]: Created slice kubepods-besteffort-podb076526f_88e8_4f5d_b600_5d379026eaec.slice - libcontainer container kubepods-besteffort-podb076526f_88e8_4f5d_b600_5d379026eaec.slice. 
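The systemd "Created slice" entries show kubelet's systemd cgroup driver at work: each pod gets a slice named from its QoS class and UID, with the UID's dashes rewritten to underscores because systemd reserves "-" as a hierarchy separator. A small sketch of the mapping, checked against the csi-node-driver pod in this log:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice approximates how kubelet's systemd cgroup driver names pod slices:
// kubepods-<qos>-pod<uid>.slice, with "-" in the UID mapped to "_".
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Matches the log: kubepods-besteffort-podb076526f_88e8_4f5d_b600_5d379026eaec.slice
	fmt.Println(podSlice("besteffort", "b076526f-88e8-4f5d-b600-5d379026eaec"))
}
```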
May 13 23:43:27.167370 containerd[1805]: time="2025-05-13T23:43:27.167119871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7b2t,Uid:b076526f-88e8-4f5d-b600-5d379026eaec,Namespace:calico-system,Attempt:0,}" May 13 23:43:27.198435 containerd[1805]: time="2025-05-13T23:43:27.198397447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9fc98d758-tglxs,Uid:5d642111-6cf3-4ceb-ae1d-8edd9a855efc,Namespace:calico-apiserver,Attempt:0,}" May 13 23:43:27.198974 containerd[1805]: time="2025-05-13T23:43:27.198762447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7dbd7c4d-d6t84,Uid:9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc,Namespace:calico-system,Attempt:0,}" May 13 23:43:27.202885 containerd[1805]: time="2025-05-13T23:43:27.201621332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d667d8584-wmdz5,Uid:879c42d3-a34a-4e3d-babc-43636a4eac7f,Namespace:calico-apiserver,Attempt:0,}" May 13 23:43:27.203162 containerd[1805]: time="2025-05-13T23:43:27.203142895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d667d8584-nzwmm,Uid:a1d416f3-ba05-4865-b5c3-91cc08110d19,Namespace:calico-apiserver,Attempt:0,}" May 13 23:43:27.686991 kubelet[3240]: E0513 23:43:27.686627 3240 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 13 23:43:27.686991 kubelet[3240]: E0513 23:43:27.686769 3240 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b164de3-9bee-4959-b322-d0fa7d8307a9-config-volume podName:6b164de3-9bee-4959-b322-d0fa7d8307a9 nodeName:}" failed. No retries permitted until 2025-05-13 23:43:28.186740873 +0000 UTC m=+26.141223509 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6b164de3-9bee-4959-b322-d0fa7d8307a9-config-volume") pod "coredns-668d6bf9bc-7gc9s" (UID: "6b164de3-9bee-4959-b322-d0fa7d8307a9") : failed to sync configmap cache: timed out waiting for the condition May 13 23:43:27.699652 kubelet[3240]: E0513 23:43:27.686584 3240 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 13 23:43:27.699652 kubelet[3240]: E0513 23:43:27.688337 3240 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/60effb1d-d7fc-4f4c-a43f-dd169b992307-config-volume podName:60effb1d-d7fc-4f4c-a43f-dd169b992307 nodeName:}" failed. No retries permitted until 2025-05-13 23:43:28.188321436 +0000 UTC m=+26.142804032 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/60effb1d-d7fc-4f4c-a43f-dd169b992307-config-volume") pod "coredns-668d6bf9bc-nz4c5" (UID: "60effb1d-d7fc-4f4c-a43f-dd169b992307") : failed to sync configmap cache: timed out waiting for the condition May 13 23:43:27.822323 containerd[1805]: time="2025-05-13T23:43:27.821869353Z" level=error msg="Failed to destroy network for sandbox \"317849a6097eca0751dd825cfca7a9c053d0fa08532eeb8689e67a143eec872d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.831506 containerd[1805]: time="2025-05-13T23:43:27.830245888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7b2t,Uid:b076526f-88e8-4f5d-b600-5d379026eaec,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"317849a6097eca0751dd825cfca7a9c053d0fa08532eeb8689e67a143eec872d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.831882 kubelet[3240]: E0513 23:43:27.830498 3240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317849a6097eca0751dd825cfca7a9c053d0fa08532eeb8689e67a143eec872d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.831882 kubelet[3240]: E0513 23:43:27.830564 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317849a6097eca0751dd825cfca7a9c053d0fa08532eeb8689e67a143eec872d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q7b2t" May 13 23:43:27.831882 kubelet[3240]: E0513 23:43:27.830582 3240 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317849a6097eca0751dd825cfca7a9c053d0fa08532eeb8689e67a143eec872d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q7b2t" May 13 23:43:27.831981 kubelet[3240]: E0513 23:43:27.830634 3240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q7b2t_calico-system(b076526f-88e8-4f5d-b600-5d379026eaec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q7b2t_calico-system(b076526f-88e8-4f5d-b600-5d379026eaec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"317849a6097eca0751dd825cfca7a9c053d0fa08532eeb8689e67a143eec872d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q7b2t" podUID="b076526f-88e8-4f5d-b600-5d379026eaec" May 13 23:43:27.850028 containerd[1805]: 
time="2025-05-13T23:43:27.849762603Z" level=error msg="Failed to destroy network for sandbox \"0ee5f712765c23d623dd38315168f5935bd49ef0e56359ba99bad3d3f2f354e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.854776 containerd[1805]: time="2025-05-13T23:43:27.854570451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7dbd7c4d-d6t84,Uid:9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ee5f712765c23d623dd38315168f5935bd49ef0e56359ba99bad3d3f2f354e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.854976 kubelet[3240]: E0513 23:43:27.854852 3240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ee5f712765c23d623dd38315168f5935bd49ef0e56359ba99bad3d3f2f354e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.854976 kubelet[3240]: E0513 23:43:27.854905 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ee5f712765c23d623dd38315168f5935bd49ef0e56359ba99bad3d3f2f354e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b7dbd7c4d-d6t84" May 13 23:43:27.854976 kubelet[3240]: E0513 23:43:27.854924 3240 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ee5f712765c23d623dd38315168f5935bd49ef0e56359ba99bad3d3f2f354e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b7dbd7c4d-d6t84" May 13 23:43:27.855204 kubelet[3240]: E0513 23:43:27.854973 3240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b7dbd7c4d-d6t84_calico-system(9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b7dbd7c4d-d6t84_calico-system(9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ee5f712765c23d623dd38315168f5935bd49ef0e56359ba99bad3d3f2f354e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b7dbd7c4d-d6t84" podUID="9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc" May 13 23:43:27.868984 containerd[1805]: time="2025-05-13T23:43:27.868934557Z" level=error msg="Failed to destroy network for sandbox \"cbb1a90a81d42ea91a9824cda1c9d309223f5be6f79218c46c59e26d4dbffa99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.876603 containerd[1805]: time="2025-05-13T23:43:27.876015609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d667d8584-wmdz5,Uid:879c42d3-a34a-4e3d-babc-43636a4eac7f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbb1a90a81d42ea91a9824cda1c9d309223f5be6f79218c46c59e26d4dbffa99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.877880 kubelet[3240]: E0513 23:43:27.876226 3240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbb1a90a81d42ea91a9824cda1c9d309223f5be6f79218c46c59e26d4dbffa99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.877880 kubelet[3240]: E0513 23:43:27.876276 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbb1a90a81d42ea91a9824cda1c9d309223f5be6f79218c46c59e26d4dbffa99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d667d8584-wmdz5" May 13 23:43:27.877880 kubelet[3240]: E0513 23:43:27.876311 3240 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbb1a90a81d42ea91a9824cda1c9d309223f5be6f79218c46c59e26d4dbffa99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d667d8584-wmdz5" May 13 23:43:27.878010 kubelet[3240]: E0513 23:43:27.876352 3240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d667d8584-wmdz5_calico-apiserver(879c42d3-a34a-4e3d-babc-43636a4eac7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d667d8584-wmdz5_calico-apiserver(879c42d3-a34a-4e3d-babc-43636a4eac7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbb1a90a81d42ea91a9824cda1c9d309223f5be6f79218c46c59e26d4dbffa99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d667d8584-wmdz5" podUID="879c42d3-a34a-4e3d-babc-43636a4eac7f" May 13 23:43:27.882503 containerd[1805]: time="2025-05-13T23:43:27.882264060Z" level=error msg="Failed to destroy network for sandbox \"1f1b143a31d16323078274c1c9707f337c4aaeae0e51f11548553d2362488b9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.887038 containerd[1805]: time="2025-05-13T23:43:27.886980549Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6d667d8584-nzwmm,Uid:a1d416f3-ba05-4865-b5c3-91cc08110d19,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1b143a31d16323078274c1c9707f337c4aaeae0e51f11548553d2362488b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.887663 kubelet[3240]: E0513 23:43:27.887376 3240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1b143a31d16323078274c1c9707f337c4aaeae0e51f11548553d2362488b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.887663 kubelet[3240]: E0513 23:43:27.887431 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1b143a31d16323078274c1c9707f337c4aaeae0e51f11548553d2362488b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d667d8584-nzwmm" May 13 23:43:27.887663 kubelet[3240]: E0513 23:43:27.887448 3240 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1b143a31d16323078274c1c9707f337c4aaeae0e51f11548553d2362488b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d667d8584-nzwmm" May 13 23:43:27.887793 kubelet[3240]: E0513 23:43:27.887484 3240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d667d8584-nzwmm_calico-apiserver(a1d416f3-ba05-4865-b5c3-91cc08110d19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d667d8584-nzwmm_calico-apiserver(a1d416f3-ba05-4865-b5c3-91cc08110d19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f1b143a31d16323078274c1c9707f337c4aaeae0e51f11548553d2362488b9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d667d8584-nzwmm" podUID="a1d416f3-ba05-4865-b5c3-91cc08110d19" May 13 23:43:27.890612 containerd[1805]: time="2025-05-13T23:43:27.890398555Z" level=error msg="Failed to destroy network for sandbox \"c4b29c2a64c8fa9f1613961649595c5bca3ca2f01731c20b0fade75b4f5d4387\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.894641 containerd[1805]: time="2025-05-13T23:43:27.894612082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9fc98d758-tglxs,Uid:5d642111-6cf3-4ceb-ae1d-8edd9a855efc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c4b29c2a64c8fa9f1613961649595c5bca3ca2f01731c20b0fade75b4f5d4387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.895122 kubelet[3240]: E0513 23:43:27.894858 3240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4b29c2a64c8fa9f1613961649595c5bca3ca2f01731c20b0fade75b4f5d4387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:27.895122 kubelet[3240]: E0513 23:43:27.894901 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4b29c2a64c8fa9f1613961649595c5bca3ca2f01731c20b0fade75b4f5d4387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9fc98d758-tglxs" May 13 23:43:27.895122 kubelet[3240]: E0513 23:43:27.894921 3240 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4b29c2a64c8fa9f1613961649595c5bca3ca2f01731c20b0fade75b4f5d4387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9fc98d758-tglxs" May 13 23:43:27.895241 kubelet[3240]: E0513 23:43:27.894950 3240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9fc98d758-tglxs_calico-apiserver(5d642111-6cf3-4ceb-ae1d-8edd9a855efc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9fc98d758-tglxs_calico-apiserver(5d642111-6cf3-4ceb-ae1d-8edd9a855efc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4b29c2a64c8fa9f1613961649595c5bca3ca2f01731c20b0fade75b4f5d4387\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9fc98d758-tglxs" podUID="5d642111-6cf3-4ceb-ae1d-8edd9a855efc" May 13 23:43:28.273784 containerd[1805]: time="2025-05-13T23:43:28.273717355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 23:43:28.399042 containerd[1805]: time="2025-05-13T23:43:28.398739497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7gc9s,Uid:6b164de3-9bee-4959-b322-d0fa7d8307a9,Namespace:kube-system,Attempt:0,}" May 13 23:43:28.401792 containerd[1805]: time="2025-05-13T23:43:28.401771022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nz4c5,Uid:60effb1d-d7fc-4f4c-a43f-dd169b992307,Namespace:kube-system,Attempt:0,}" May 13 23:43:28.473395 containerd[1805]: time="2025-05-13T23:43:28.473286309Z" level=error msg="Failed to destroy network for sandbox \"27bee8e6b31d89cd3a569e6360a81728f787235ab7bec445ac0e94ba4a87e5d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 
23:43:28.478146 containerd[1805]: time="2025-05-13T23:43:28.477964758Z" level=error msg="Failed to destroy network for sandbox \"fa128ba9f9e866458eae6b454bc9b992574415506611e91c19ee8bb2f61a8ae8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:28.479819 containerd[1805]: time="2025-05-13T23:43:28.479495160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7gc9s,Uid:6b164de3-9bee-4959-b322-d0fa7d8307a9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27bee8e6b31d89cd3a569e6360a81728f787235ab7bec445ac0e94ba4a87e5d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:28.479903 kubelet[3240]: E0513 23:43:28.479854 3240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27bee8e6b31d89cd3a569e6360a81728f787235ab7bec445ac0e94ba4a87e5d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:28.479953 kubelet[3240]: E0513 23:43:28.479910 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27bee8e6b31d89cd3a569e6360a81728f787235ab7bec445ac0e94ba4a87e5d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7gc9s" May 13 23:43:28.479953 kubelet[3240]: E0513 23:43:28.479932 3240 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27bee8e6b31d89cd3a569e6360a81728f787235ab7bec445ac0e94ba4a87e5d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7gc9s" May 13 23:43:28.480019 kubelet[3240]: E0513 23:43:28.479975 3240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7gc9s_kube-system(6b164de3-9bee-4959-b322-d0fa7d8307a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7gc9s_kube-system(6b164de3-9bee-4959-b322-d0fa7d8307a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27bee8e6b31d89cd3a569e6360a81728f787235ab7bec445ac0e94ba4a87e5d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7gc9s" podUID="6b164de3-9bee-4959-b322-d0fa7d8307a9" May 13 23:43:28.487205 containerd[1805]: time="2025-05-13T23:43:28.486803813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nz4c5,Uid:60effb1d-d7fc-4f4c-a43f-dd169b992307,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fa128ba9f9e866458eae6b454bc9b992574415506611e91c19ee8bb2f61a8ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:28.487336 kubelet[3240]: E0513 23:43:28.487024 3240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa128ba9f9e866458eae6b454bc9b992574415506611e91c19ee8bb2f61a8ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:28.487336 kubelet[3240]: E0513 23:43:28.487087 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa128ba9f9e866458eae6b454bc9b992574415506611e91c19ee8bb2f61a8ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nz4c5" May 13 23:43:28.487336 kubelet[3240]: E0513 23:43:28.487105 3240 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa128ba9f9e866458eae6b454bc9b992574415506611e91c19ee8bb2f61a8ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nz4c5" May 13 23:43:28.487427 kubelet[3240]: E0513 23:43:28.487146 3240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nz4c5_kube-system(60effb1d-d7fc-4f4c-a43f-dd169b992307)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nz4c5_kube-system(60effb1d-d7fc-4f4c-a43f-dd169b992307)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa128ba9f9e866458eae6b454bc9b992574415506611e91c19ee8bb2f61a8ae8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nz4c5" podUID="60effb1d-d7fc-4f4c-a43f-dd169b992307" May 13 23:43:28.656233 systemd[1]: run-netns-cni\x2d5c6bb3ca\x2d2f41\x2d1981\x2def3d\x2d83f27c3b1bd2.mount: Deactivated successfully. May 13 23:43:28.656323 systemd[1]: run-netns-cni\x2de2717b0f\x2d916d\x2d2b2d\x2d1fc1\x2d82e2e6fa37bd.mount: Deactivated successfully. May 13 23:43:28.656372 systemd[1]: run-netns-cni\x2d680f3603\x2d34bb\x2ddfd0\x2da00b\x2df79202b3c5d7.mount: Deactivated successfully. May 13 23:43:28.656419 systemd[1]: run-netns-cni\x2d02283e05\x2da415\x2d7729\x2d4413\x2d2206b3c47f4a.mount: Deactivated successfully. May 13 23:43:28.656465 systemd[1]: run-netns-cni\x2d080b70c0\x2d5055\x2dc4b2\x2d1b4b\x2d7361dd859f97.mount: Deactivated successfully. May 13 23:43:32.373237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3944604959.mount: Deactivated successfully. 
May 13 23:43:32.459326 containerd[1805]: time="2025-05-13T23:43:32.458710342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:32.463119 containerd[1805]: time="2025-05-13T23:43:32.463032230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 13 23:43:32.469901 containerd[1805]: time="2025-05-13T23:43:32.469856162Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:32.476754 containerd[1805]: time="2025-05-13T23:43:32.476709294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:32.477567 containerd[1805]: time="2025-05-13T23:43:32.477178775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.2034231s" May 13 23:43:32.477567 containerd[1805]: time="2025-05-13T23:43:32.477215855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 13 23:43:32.488950 containerd[1805]: time="2025-05-13T23:43:32.488907916Z" level=info msg="CreateContainer within sandbox \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 23:43:32.524408 containerd[1805]: time="2025-05-13T23:43:32.524163739Z" level=info msg="Container f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:32.555648 containerd[1805]: time="2025-05-13T23:43:32.555579474Z" level=info msg="CreateContainer within sandbox \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\"" May 13 23:43:32.557680 containerd[1805]: time="2025-05-13T23:43:32.557643478Z" level=info msg="StartContainer for \"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\"" May 13 23:43:32.559336 containerd[1805]: time="2025-05-13T23:43:32.559280841Z" level=info msg="connecting to shim f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c" address="unix:///run/containerd/s/051c342b4aca59dc1c41be23c79acb32f17565b7b5effda23ef982aff1edc278" protocol=ttrpc version=3 May 13 23:43:32.581747 systemd[1]: Started cri-containerd-f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c.scope - libcontainer container f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c. May 13 23:43:32.626957 containerd[1805]: time="2025-05-13T23:43:32.626696680Z" level=info msg="StartContainer for \"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\" returns successfully" May 13 23:43:32.866294 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 23:43:32.866429 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>.
All Rights Reserved. May 13 23:43:34.550626 kernel: bpftool[4351]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 23:43:34.736069 systemd-networkd[1346]: vxlan.calico: Link UP May 13 23:43:34.736077 systemd-networkd[1346]: vxlan.calico: Gained carrier May 13 23:43:36.725784 systemd-networkd[1346]: vxlan.calico: Gained IPv6LL May 13 23:43:39.160508 containerd[1805]: time="2025-05-13T23:43:39.160215407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9fc98d758-tglxs,Uid:5d642111-6cf3-4ceb-ae1d-8edd9a855efc,Namespace:calico-apiserver,Attempt:0,}" May 13 23:43:39.160508 containerd[1805]: time="2025-05-13T23:43:39.160212887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d667d8584-nzwmm,Uid:a1d416f3-ba05-4865-b5c3-91cc08110d19,Namespace:calico-apiserver,Attempt:0,}" May 13 23:43:39.321578 systemd-networkd[1346]: cali89568cc0cf7: Link UP May 13 23:43:39.323834 systemd-networkd[1346]: cali89568cc0cf7: Gained carrier May 13 23:43:39.341829 kubelet[3240]: I0513 23:43:39.341754 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-c6ls4" podStartSLOduration=8.296156106 podStartE2EDuration="23.34163692s" podCreationTimestamp="2025-05-13 23:43:16 +0000 UTC" firstStartedPulling="2025-05-13 23:43:17.432370642 +0000 UTC m=+15.386853238" lastFinishedPulling="2025-05-13 23:43:32.477851456 +0000 UTC m=+30.432334052" observedRunningTime="2025-05-13 23:43:33.307124048 +0000 UTC m=+31.261606644" watchObservedRunningTime="2025-05-13 23:43:39.34163692 +0000 UTC m=+37.296119516" May 13 23:43:39.345517 containerd[1805]: 2025-05-13 23:43:39.235 [INFO][4425] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--tglxs-eth0 calico-apiserver-9fc98d758- calico-apiserver 5d642111-6cf3-4ceb-ae1d-8edd9a855efc 714 0 2025-05-13 23:43:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9fc98d758 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-5e434aba7d calico-apiserver-9fc98d758-tglxs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali89568cc0cf7 [] []}} ContainerID="6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-tglxs" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--tglxs-" May 13 23:43:39.345517 containerd[1805]: 2025-05-13 23:43:39.235 [INFO][4425] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-tglxs" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--tglxs-eth0" May 13 23:43:39.345517 containerd[1805]: 2025-05-13 23:43:39.266 [INFO][4448] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" HandleID="k8s-pod-network.6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--tglxs-eth0" May 13 23:43:39.345692 containerd[1805]: 2025-05-13 23:43:39.280 [INFO][4448] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" HandleID="k8s-pod-network.6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--tglxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319340), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-5e434aba7d", "pod":"calico-apiserver-9fc98d758-tglxs", "timestamp":"2025-05-13 23:43:39.266361917 +0000 UTC"}, Hostname:"ci-4284.0.0-n-5e434aba7d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:43:39.345692 containerd[1805]: 2025-05-13 23:43:39.281 [INFO][4448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:39.345692 containerd[1805]: 2025-05-13 23:43:39.281 [INFO][4448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:43:39.345692 containerd[1805]: 2025-05-13 23:43:39.282 [INFO][4448] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-5e434aba7d' May 13 23:43:39.345692 containerd[1805]: 2025-05-13 23:43:39.283 [INFO][4448] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.345692 containerd[1805]: 2025-05-13 23:43:39.287 [INFO][4448] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.345692 containerd[1805]: 2025-05-13 23:43:39.291 [INFO][4448] ipam/ipam.go 489: Trying affinity for 192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.345692 containerd[1805]: 2025-05-13 23:43:39.292 [INFO][4448] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.345692 containerd[1805]: 2025-05-13 23:43:39.294 [INFO][4448] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.345917 containerd[1805]: 2025-05-13 23:43:39.294 [INFO][4448] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.345917 containerd[1805]: 2025-05-13 23:43:39.296 [INFO][4448] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373 May 13 23:43:39.345917 containerd[1805]: 2025-05-13 23:43:39.300 [INFO][4448] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.345917 containerd[1805]: 2025-05-13 23:43:39.310 [INFO][4448] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.1/26] block=192.168.106.0/26 handle="k8s-pod-network.6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.345917 containerd[1805]: 2025-05-13 23:43:39.310 [INFO][4448] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.1/26] handle="k8s-pod-network.6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.345917 containerd[1805]: 2025-05-13 23:43:39.310 
[INFO][4448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:43:39.345917 containerd[1805]: 2025-05-13 23:43:39.310 [INFO][4448] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.1/26] IPv6=[] ContainerID="6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" HandleID="k8s-pod-network.6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--tglxs-eth0" May 13 23:43:39.346048 containerd[1805]: 2025-05-13 23:43:39.313 [INFO][4425] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-tglxs" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--tglxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--tglxs-eth0", GenerateName:"calico-apiserver-9fc98d758-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d642111-6cf3-4ceb-ae1d-8edd9a855efc", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9fc98d758", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"", Pod:"calico-apiserver-9fc98d758-tglxs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89568cc0cf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:39.346097 containerd[1805]: 2025-05-13 23:43:39.314 [INFO][4425] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.1/32] ContainerID="6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-tglxs" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--tglxs-eth0" May 13 23:43:39.346097 containerd[1805]: 2025-05-13 23:43:39.314 [INFO][4425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89568cc0cf7 ContainerID="6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-tglxs" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--tglxs-eth0" May 13 23:43:39.346097 containerd[1805]: 2025-05-13 23:43:39.323 [INFO][4425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-tglxs" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--tglxs-eth0" May 13 23:43:39.346156 containerd[1805]: 2025-05-13 
23:43:39.325 [INFO][4425] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-tglxs" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--tglxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--tglxs-eth0", GenerateName:"calico-apiserver-9fc98d758-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d642111-6cf3-4ceb-ae1d-8edd9a855efc", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9fc98d758", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373", Pod:"calico-apiserver-9fc98d758-tglxs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89568cc0cf7", MAC:"3e:d9:0c:30:e6:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:39.346203 containerd[1805]: 2025-05-13 23:43:39.342 [INFO][4425] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-tglxs" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--tglxs-eth0" May 13 23:43:39.415709 systemd-networkd[1346]: cali3b5250e78a7: Link UP May 13 23:43:39.417778 systemd-networkd[1346]: cali3b5250e78a7: Gained carrier May 13 23:43:39.442411 containerd[1805]: 2025-05-13 23:43:39.235 [INFO][4434] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0 calico-apiserver-6d667d8584- calico-apiserver a1d416f3-ba05-4865-b5c3-91cc08110d19 721 0 2025-05-13 23:43:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d667d8584 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-5e434aba7d calico-apiserver-6d667d8584-nzwmm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3b5250e78a7 [] []}} ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-nzwmm" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-" May 13 23:43:39.442411 containerd[1805]: 2025-05-13 
23:43:39.235 [INFO][4434] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-nzwmm" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:43:39.442411 containerd[1805]: 2025-05-13 23:43:39.267 [INFO][4450] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" HandleID="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:43:39.443394 containerd[1805]: 2025-05-13 23:43:39.285 [INFO][4450] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" HandleID="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039aae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-5e434aba7d", "pod":"calico-apiserver-6d667d8584-nzwmm", "timestamp":"2025-05-13 23:43:39.267466239 +0000 UTC"}, Hostname:"ci-4284.0.0-n-5e434aba7d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:43:39.443394 containerd[1805]: 2025-05-13 23:43:39.285 [INFO][4450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:39.443394 containerd[1805]: 2025-05-13 23:43:39.310 [INFO][4450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
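Note the interleaving above: handle [4450] logs "About to acquire host-wide IPAM lock." at 23:43:39.285 but only "Acquired" at 23:43:39.310, the same instant [4448] logs "Released". The two concurrent pod adds serialize on a single per-host lock so each block read, claim, and write is atomic. A rough Go sketch of that serialization; the lock name and message flow come from the log, while the allocator body is a stand-in:

package main

import (
	"fmt"
	"sync"
)

// hostLock stands in for the "host-wide IPAM lock": concurrent CNI ADDs
// serialize here, which is why handle [4450] waits ~25ms between "About to
// acquire" and "Acquired" while [4448] holds the lock.
var hostLock sync.Mutex

func assign(handle string, next *int, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Printf("[%s] About to acquire host-wide IPAM lock.\n", handle)
	hostLock.Lock()
	fmt.Printf("[%s] Acquired host-wide IPAM lock.\n", handle)
	ip := fmt.Sprintf("192.168.106.%d/26", *next) // claim next address in the block
	*next++
	hostLock.Unlock()
	fmt.Printf("[%s] Released host-wide IPAM lock. claimed=%s\n", handle, ip)
}

func main() {
	var wg sync.WaitGroup
	next := 1
	wg.Add(2)
	go assign("4448", &next, &wg)
	go assign("4450", &next, &wg)
	wg.Wait()
}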
May 13 23:43:39.443394 containerd[1805]: 2025-05-13 23:43:39.310 [INFO][4450] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-5e434aba7d' May 13 23:43:39.443394 containerd[1805]: 2025-05-13 23:43:39.384 [INFO][4450] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.443394 containerd[1805]: 2025-05-13 23:43:39.388 [INFO][4450] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.443394 containerd[1805]: 2025-05-13 23:43:39.393 [INFO][4450] ipam/ipam.go 489: Trying affinity for 192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.443394 containerd[1805]: 2025-05-13 23:43:39.394 [INFO][4450] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.443394 containerd[1805]: 2025-05-13 23:43:39.396 [INFO][4450] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.443585 containerd[1805]: 2025-05-13 23:43:39.396 [INFO][4450] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.443585 containerd[1805]: 2025-05-13 23:43:39.398 [INFO][4450] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786 May 13 23:43:39.443585 containerd[1805]: 2025-05-13 23:43:39.402 [INFO][4450] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.443585 containerd[1805]: 2025-05-13 23:43:39.410 [INFO][4450] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.2/26] block=192.168.106.0/26 handle="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.443585 containerd[1805]: 2025-05-13 23:43:39.410 [INFO][4450] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.2/26] handle="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:39.443585 containerd[1805]: 2025-05-13 23:43:39.410 [INFO][4450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
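Each allocation above repeats the same steps: look up the host's affinities, try the affine block 192.168.106.0/26, load it, claim the next free address, and write the block back, which is why the pods receive consecutive /32s (.1, .2, and .3 below). A sketch of the first-free scan over that /26, assuming a simple in-memory claimed set where real Calico reads and writes blocks in its datastore:

package main

import (
	"fmt"
	"net"
)

// inc returns the next IP address (big-endian increment).
func inc(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

// nextFree scans the block for the first unclaimed host address.
func nextFree(block *net.IPNet, claimed map[string]bool) net.IP {
	for ip := inc(block.IP); block.Contains(ip); ip = inc(ip) {
		if !claimed[ip.String()] {
			return ip
		}
	}
	return nil // block exhausted; real IPAM would claim a new block
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.106.0/26")
	claimed := map[string]bool{}
	for i := 0; i < 3; i++ {
		ip := nextFree(block, claimed)
		claimed[ip.String()] = true
		fmt.Printf("Successfully claimed IPs: [%s/26]\n", ip) // .1, .2, .3
	}
}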
May 13 23:43:39.443585 containerd[1805]: 2025-05-13 23:43:39.411 [INFO][4450] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.2/26] IPv6=[] ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" HandleID="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:43:39.444848 containerd[1805]: 2025-05-13 23:43:39.412 [INFO][4434] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-nzwmm" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0", GenerateName:"calico-apiserver-6d667d8584-", Namespace:"calico-apiserver", SelfLink:"", UID:"a1d416f3-ba05-4865-b5c3-91cc08110d19", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d667d8584", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"", Pod:"calico-apiserver-6d667d8584-nzwmm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b5250e78a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:39.444910 containerd[1805]: 2025-05-13 23:43:39.413 [INFO][4434] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.2/32] ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-nzwmm" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:43:39.444910 containerd[1805]: 2025-05-13 23:43:39.413 [INFO][4434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b5250e78a7 ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-nzwmm" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:43:39.444910 containerd[1805]: 2025-05-13 23:43:39.417 [INFO][4434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-nzwmm" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:43:39.444969 containerd[1805]: 2025-05-13 23:43:39.418 [INFO][4434] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-nzwmm" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0", GenerateName:"calico-apiserver-6d667d8584-", Namespace:"calico-apiserver", SelfLink:"", UID:"a1d416f3-ba05-4865-b5c3-91cc08110d19", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d667d8584", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786", Pod:"calico-apiserver-6d667d8584-nzwmm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b5250e78a7", MAC:"4a:28:68:0e:36:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:39.445016 containerd[1805]: 2025-05-13 23:43:39.436 [INFO][4434] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-nzwmm" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:43:39.460778 containerd[1805]: time="2025-05-13T23:43:39.460472698Z" level=info msg="connecting to shim 6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373" address="unix:///run/containerd/s/528ea96a87d14cc94d3571a5a0b97266855c6cbb4f784d45175650a54edfebde" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:39.500766 systemd[1]: Started cri-containerd-6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373.scope - libcontainer container 6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373. May 13 23:43:39.517607 containerd[1805]: time="2025-05-13T23:43:39.516316059Z" level=info msg="connecting to shim 0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" address="unix:///run/containerd/s/c53f0f9c520b8dab493bb5488b3cd66c267f691a5d07482e46fd66b85c0c306d" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:39.552763 systemd[1]: Started cri-containerd-0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786.scope - libcontainer container 0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786. 
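Two conventions are visible in the shim lines above: containerd talks to each container's shim over a unix socket speaking ttrpc ("protocol=ttrpc version=3"), and with the systemd cgroup driver each container lands in a transient unit named after its ID, which is what the "Started cri-containerd-<id>.scope" messages are. The naming is mechanical, as this small sketch shows:

package main

import "fmt"

// scopeUnit derives the transient systemd unit that containerd's CRI plugin
// creates for a container when the systemd cgroup driver is in use.
func scopeUnit(containerID string) string {
	return "cri-containerd-" + containerID + ".scope"
}

func main() {
	// Sandbox ID taken from the log above.
	id := "6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373"
	fmt.Println(scopeUnit(id)) // matches systemd's "Started cri-containerd-....scope"
}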
May 13 23:43:39.560184 containerd[1805]: time="2025-05-13T23:43:39.559884073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9fc98d758-tglxs,Uid:5d642111-6cf3-4ceb-ae1d-8edd9a855efc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373\"" May 13 23:43:39.563558 containerd[1805]: time="2025-05-13T23:43:39.563156880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:43:39.616471 containerd[1805]: time="2025-05-13T23:43:39.616432396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d667d8584-nzwmm,Uid:a1d416f3-ba05-4865-b5c3-91cc08110d19,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\"" May 13 23:43:40.164307 containerd[1805]: time="2025-05-13T23:43:40.164238744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nz4c5,Uid:60effb1d-d7fc-4f4c-a43f-dd169b992307,Namespace:kube-system,Attempt:0,}" May 13 23:43:40.300030 systemd-networkd[1346]: cali9fbf4624f14: Link UP May 13 23:43:40.300605 systemd-networkd[1346]: cali9fbf4624f14: Gained carrier May 13 23:43:40.324492 containerd[1805]: 2025-05-13 23:43:40.224 [INFO][4589] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--nz4c5-eth0 coredns-668d6bf9bc- kube-system 60effb1d-d7fc-4f4c-a43f-dd169b992307 723 0 2025-05-13 23:43:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-n-5e434aba7d coredns-668d6bf9bc-nz4c5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9fbf4624f14 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" Namespace="kube-system" Pod="coredns-668d6bf9bc-nz4c5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--nz4c5-" May 13 23:43:40.324492 containerd[1805]: 2025-05-13 23:43:40.224 [INFO][4589] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" Namespace="kube-system" Pod="coredns-668d6bf9bc-nz4c5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--nz4c5-eth0" May 13 23:43:40.324492 containerd[1805]: 2025-05-13 23:43:40.250 [INFO][4603] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" HandleID="k8s-pod-network.bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" Workload="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--nz4c5-eth0" May 13 23:43:40.325134 containerd[1805]: 2025-05-13 23:43:40.261 [INFO][4603] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" HandleID="k8s-pod-network.bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" Workload="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--nz4c5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028fc70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-n-5e434aba7d", "pod":"coredns-668d6bf9bc-nz4c5", "timestamp":"2025-05-13 23:43:40.250476571 +0000 UTC"}, 
Hostname:"ci-4284.0.0-n-5e434aba7d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:43:40.325134 containerd[1805]: 2025-05-13 23:43:40.261 [INFO][4603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:40.325134 containerd[1805]: 2025-05-13 23:43:40.261 [INFO][4603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:43:40.325134 containerd[1805]: 2025-05-13 23:43:40.261 [INFO][4603] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-5e434aba7d' May 13 23:43:40.325134 containerd[1805]: 2025-05-13 23:43:40.263 [INFO][4603] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:40.325134 containerd[1805]: 2025-05-13 23:43:40.266 [INFO][4603] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:40.325134 containerd[1805]: 2025-05-13 23:43:40.269 [INFO][4603] ipam/ipam.go 489: Trying affinity for 192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:40.325134 containerd[1805]: 2025-05-13 23:43:40.271 [INFO][4603] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:40.325134 containerd[1805]: 2025-05-13 23:43:40.273 [INFO][4603] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:40.325391 containerd[1805]: 2025-05-13 23:43:40.273 [INFO][4603] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:40.325391 containerd[1805]: 2025-05-13 23:43:40.274 [INFO][4603] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f May 13 23:43:40.325391 containerd[1805]: 2025-05-13 23:43:40.282 [INFO][4603] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:40.325391 containerd[1805]: 2025-05-13 23:43:40.292 [INFO][4603] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.3/26] block=192.168.106.0/26 handle="k8s-pod-network.bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:40.325391 containerd[1805]: 2025-05-13 23:43:40.293 [INFO][4603] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.3/26] handle="k8s-pod-network.bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:40.325391 containerd[1805]: 2025-05-13 23:43:40.293 [INFO][4603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:43:40.325391 containerd[1805]: 2025-05-13 23:43:40.293 [INFO][4603] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.3/26] IPv6=[] ContainerID="bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" HandleID="k8s-pod-network.bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" Workload="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--nz4c5-eth0" May 13 23:43:40.325559 containerd[1805]: 2025-05-13 23:43:40.295 [INFO][4589] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" Namespace="kube-system" Pod="coredns-668d6bf9bc-nz4c5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--nz4c5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--nz4c5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"60effb1d-d7fc-4f4c-a43f-dd169b992307", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"", Pod:"coredns-668d6bf9bc-nz4c5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9fbf4624f14", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:40.325559 containerd[1805]: 2025-05-13 23:43:40.296 [INFO][4589] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.3/32] ContainerID="bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" Namespace="kube-system" Pod="coredns-668d6bf9bc-nz4c5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--nz4c5-eth0" May 13 23:43:40.325559 containerd[1805]: 2025-05-13 23:43:40.296 [INFO][4589] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9fbf4624f14 ContainerID="bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" Namespace="kube-system" Pod="coredns-668d6bf9bc-nz4c5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--nz4c5-eth0" May 13 23:43:40.325559 containerd[1805]: 2025-05-13 23:43:40.300 [INFO][4589] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" Namespace="kube-system" Pod="coredns-668d6bf9bc-nz4c5" 
WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--nz4c5-eth0" May 13 23:43:40.325559 containerd[1805]: 2025-05-13 23:43:40.301 [INFO][4589] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" Namespace="kube-system" Pod="coredns-668d6bf9bc-nz4c5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--nz4c5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--nz4c5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"60effb1d-d7fc-4f4c-a43f-dd169b992307", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f", Pod:"coredns-668d6bf9bc-nz4c5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9fbf4624f14", MAC:"9a:6d:78:5f:4e:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:40.325559 containerd[1805]: 2025-05-13 23:43:40.320 [INFO][4589] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" Namespace="kube-system" Pod="coredns-668d6bf9bc-nz4c5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--nz4c5-eth0" May 13 23:43:40.431191 containerd[1805]: time="2025-05-13T23:43:40.430731081Z" level=info msg="connecting to shim bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f" address="unix:///run/containerd/s/7298b68299fb69cb998d00f8a9083a19b70c460443b65efa0b55b6a64d153acd" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:40.458743 systemd[1]: Started cri-containerd-bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f.scope - libcontainer container bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f. 
May 13 23:43:40.497708 containerd[1805]: time="2025-05-13T23:43:40.497582466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nz4c5,Uid:60effb1d-d7fc-4f4c-a43f-dd169b992307,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f\"" May 13 23:43:40.501534 containerd[1805]: time="2025-05-13T23:43:40.501405075Z" level=info msg="CreateContainer within sandbox \"bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:43:40.535058 containerd[1805]: time="2025-05-13T23:43:40.535020307Z" level=info msg="Container 3328e0f240ac2d984cc6a811625cb3c8db86bad6f52d66e16e3a3622f04f6e8f: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:40.564559 containerd[1805]: time="2025-05-13T23:43:40.563887850Z" level=info msg="CreateContainer within sandbox \"bd32554762cc99991aa0a36445eb76c69839e5e1ecade91970c0eda1aee4fb6f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3328e0f240ac2d984cc6a811625cb3c8db86bad6f52d66e16e3a3622f04f6e8f\"" May 13 23:43:40.565471 containerd[1805]: time="2025-05-13T23:43:40.565442813Z" level=info msg="StartContainer for \"3328e0f240ac2d984cc6a811625cb3c8db86bad6f52d66e16e3a3622f04f6e8f\"" May 13 23:43:40.568726 containerd[1805]: time="2025-05-13T23:43:40.568684740Z" level=info msg="connecting to shim 3328e0f240ac2d984cc6a811625cb3c8db86bad6f52d66e16e3a3622f04f6e8f" address="unix:///run/containerd/s/7298b68299fb69cb998d00f8a9083a19b70c460443b65efa0b55b6a64d153acd" protocol=ttrpc version=3 May 13 23:43:40.592825 systemd[1]: Started cri-containerd-3328e0f240ac2d984cc6a811625cb3c8db86bad6f52d66e16e3a3622f04f6e8f.scope - libcontainer container 3328e0f240ac2d984cc6a811625cb3c8db86bad6f52d66e16e3a3622f04f6e8f. 
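The "connecting to shim ... protocol=ttrpc version=3" and "Started cri-containerd-....scope" pairs above are containerd launching a runtime shim over its ttrpc socket and systemd tracking the container in a transient scope unit. For orientation only, a minimal sketch of the same create/start path through containerd's Go client; the socket path, the k8s.io namespace, and the image reference and container ID are illustrative assumptions, and the import paths follow the containerd 1.x getting-started layout (containerd 2.x moved them under github.com/containerd/containerd/v2):

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        // Same socket the kubelet talks to; CRI-managed objects live in "k8s.io".
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        // CreateContainer: metadata plus an OCI spec derived from the image.
        container, err := client.NewContainer(ctx, "demo",
            containerd.WithNewSnapshot("demo-snapshot", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)))
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        // NewTask is the step that launches the shim ("connecting to shim ...");
        // Start is what the "StartContainer ... returns successfully" lines report.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
    }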
May 13 23:43:40.625868 containerd[1805]: time="2025-05-13T23:43:40.625818864Z" level=info msg="StartContainer for \"3328e0f240ac2d984cc6a811625cb3c8db86bad6f52d66e16e3a3622f04f6e8f\" returns successfully" May 13 23:43:40.757803 systemd-networkd[1346]: cali89568cc0cf7: Gained IPv6LL May 13 23:43:40.885851 systemd-networkd[1346]: cali3b5250e78a7: Gained IPv6LL May 13 23:43:41.160101 containerd[1805]: time="2025-05-13T23:43:41.159827702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d667d8584-wmdz5,Uid:879c42d3-a34a-4e3d-babc-43636a4eac7f,Namespace:calico-apiserver,Attempt:0,}" May 13 23:43:41.160324 containerd[1805]: time="2025-05-13T23:43:41.160303583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7gc9s,Uid:6b164de3-9bee-4959-b322-d0fa7d8307a9,Namespace:kube-system,Attempt:0,}" May 13 23:43:41.363843 kubelet[3240]: I0513 23:43:41.363736 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nz4c5" podStartSLOduration=32.363720144 podStartE2EDuration="32.363720144s" podCreationTimestamp="2025-05-13 23:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:41.332764237 +0000 UTC m=+39.287246873" watchObservedRunningTime="2025-05-13 23:43:41.363720144 +0000 UTC m=+39.318202740" May 13 23:43:41.413455 systemd-networkd[1346]: cali07080203104: Link UP May 13 23:43:41.414200 systemd-networkd[1346]: cali07080203104: Gained carrier May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.254 [INFO][4699] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0 calico-apiserver-6d667d8584- calico-apiserver 879c42d3-a34a-4e3d-babc-43636a4eac7f 717 0 2025-05-13 23:43:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d667d8584 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-5e434aba7d calico-apiserver-6d667d8584-wmdz5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali07080203104 [] []}} ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-wmdz5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-" May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.254 [INFO][4699] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-wmdz5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.293 [INFO][4725] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" HandleID="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.313 [INFO][4725] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" HandleID="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-5e434aba7d", "pod":"calico-apiserver-6d667d8584-wmdz5", "timestamp":"2025-05-13 23:43:41.293426312 +0000 UTC"}, Hostname:"ci-4284.0.0-n-5e434aba7d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.313 [INFO][4725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.314 [INFO][4725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.314 [INFO][4725] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-5e434aba7d' May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.317 [INFO][4725] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.326 [INFO][4725] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.362 [INFO][4725] ipam/ipam.go 489: Trying affinity for 192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.365 [INFO][4725] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.377 [INFO][4725] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.377 [INFO][4725] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.386 [INFO][4725] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9 May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.393 [INFO][4725] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.408 [INFO][4725] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.4/26] block=192.168.106.0/26 handle="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.408 [INFO][4725] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.4/26] handle="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.408 
[INFO][4725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:43:41.444454 containerd[1805]: 2025-05-13 23:43:41.408 [INFO][4725] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.4/26] IPv6=[] ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" HandleID="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:43:41.445281 containerd[1805]: 2025-05-13 23:43:41.410 [INFO][4699] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-wmdz5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0", GenerateName:"calico-apiserver-6d667d8584-", Namespace:"calico-apiserver", SelfLink:"", UID:"879c42d3-a34a-4e3d-babc-43636a4eac7f", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d667d8584", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"", Pod:"calico-apiserver-6d667d8584-wmdz5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07080203104", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:41.445281 containerd[1805]: 2025-05-13 23:43:41.410 [INFO][4699] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.4/32] ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-wmdz5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:43:41.445281 containerd[1805]: 2025-05-13 23:43:41.410 [INFO][4699] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07080203104 ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-wmdz5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:43:41.445281 containerd[1805]: 2025-05-13 23:43:41.424 [INFO][4699] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-wmdz5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:43:41.445281 containerd[1805]: 
2025-05-13 23:43:41.426 [INFO][4699] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-wmdz5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0", GenerateName:"calico-apiserver-6d667d8584-", Namespace:"calico-apiserver", SelfLink:"", UID:"879c42d3-a34a-4e3d-babc-43636a4eac7f", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d667d8584", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9", Pod:"calico-apiserver-6d667d8584-wmdz5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07080203104", MAC:"82:10:3c:cd:5f:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:41.445281 containerd[1805]: 2025-05-13 23:43:41.441 [INFO][4699] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Namespace="calico-apiserver" Pod="calico-apiserver-6d667d8584-wmdz5" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:43:41.506623 containerd[1805]: time="2025-05-13T23:43:41.505987493Z" level=info msg="connecting to shim d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" address="unix:///run/containerd/s/d77e5dc55865e09701610af4751ac01691f2f4b7e58d6cc9a0444f1a39cd9cb9" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:41.544739 systemd[1]: Started cri-containerd-d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9.scope - libcontainer container d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9. 
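The ipam/ipam.go message sequence above (acquire the host-wide lock, look up affinities, try 192.168.106.0/26, load the block, write the block to claim the IP, release the lock) is Calico's block-affinity allocator at work: each node owns one or more /26 blocks and serves pod IPs out of them under a per-host lock. What follows is a deliberately simplified sketch of that flow, not Calico's actual code; the block type, the in-memory map, and the truncated handle string are all illustrative assumptions:

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // block models one /26 affinity block; Calico's real datastore-backed
    // structures are richer, this only shows the shape of the algorithm.
    type block struct {
        cidr netip.Prefix
        used map[netip.Addr]string // addr -> handle ("k8s-pod-network.<containerID>")
    }

    var hostIPAMLock sync.Mutex // stands in for the "host-wide IPAM lock"

    // assignOne walks the logged sequence: lock, scan the host's affine
    // block, hand out the next free address, record the handle, unlock.
    func assignOne(b *block, handle string) (netip.Addr, bool) {
        hostIPAMLock.Lock()         // "Acquired host-wide IPAM lock."
        defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."

        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if _, taken := b.used[a]; !taken {
                b.used[a] = handle // "Writing block in order to claim IPs"
                return a, true
            }
        }
        return netip.Addr{}, false // block exhausted; real IPAM would claim a new block
    }

    func main() {
        b := &block{cidr: netip.MustParsePrefix("192.168.106.0/26"), used: map[netip.Addr]string{}}
        // Mark .0-.3 taken (network address plus the pods assigned earlier in the log).
        for a, n := b.cidr.Addr(), 0; n < 4; a, n = a.Next(), n+1 {
            b.used[a] = "earlier"
        }
        addr, _ := assignOne(b, "k8s-pod-network.d049e96e...")
        fmt.Println(addr) // 192.168.106.4
    }

Run as-is it prints 192.168.106.4, matching the address the allocator handed to calico-apiserver-6d667d8584-wmdz5 above.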
May 13 23:43:41.556809 systemd-networkd[1346]: cali18ea2b666da: Link UP May 13 23:43:41.559405 systemd-networkd[1346]: cali18ea2b666da: Gained carrier May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.269 [INFO][4711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--7gc9s-eth0 coredns-668d6bf9bc- kube-system 6b164de3-9bee-4959-b322-d0fa7d8307a9 722 0 2025-05-13 23:43:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-n-5e434aba7d coredns-668d6bf9bc-7gc9s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali18ea2b666da [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" Namespace="kube-system" Pod="coredns-668d6bf9bc-7gc9s" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--7gc9s-" May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.269 [INFO][4711] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" Namespace="kube-system" Pod="coredns-668d6bf9bc-7gc9s" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--7gc9s-eth0" May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.299 [INFO][4730] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" HandleID="k8s-pod-network.a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" Workload="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--7gc9s-eth0" May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.407 [INFO][4730] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" HandleID="k8s-pod-network.a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" Workload="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--7gc9s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000334c50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-n-5e434aba7d", "pod":"coredns-668d6bf9bc-7gc9s", "timestamp":"2025-05-13 23:43:41.299166884 +0000 UTC"}, Hostname:"ci-4284.0.0-n-5e434aba7d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.407 [INFO][4730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.408 [INFO][4730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.408 [INFO][4730] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-5e434aba7d' May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.418 [INFO][4730] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.508 [INFO][4730] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.514 [INFO][4730] ipam/ipam.go 489: Trying affinity for 192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.516 [INFO][4730] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.525 [INFO][4730] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.525 [INFO][4730] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.527 [INFO][4730] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588 May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.535 [INFO][4730] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.546 [INFO][4730] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.5/26] block=192.168.106.0/26 handle="k8s-pod-network.a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.546 [INFO][4730] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.5/26] handle="k8s-pod-network.a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.546 [INFO][4730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
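Every allocation in this section comes from the same affine block, 192.168.106.0/26, so this node can hand out at most 64 addresses before it has to claim a further block. The arithmetic in plain Go; only the prefix and the freshly claimed .5 are taken from the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.106.0/26")
        fmt.Println(1 << (32 - block.Bits()))                              // 64 addresses per /26 block
        fmt.Println(block.Contains(netip.MustParseAddr("192.168.106.5"))) // true: .5 just claimed above
    }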
May 13 23:43:41.584386 containerd[1805]: 2025-05-13 23:43:41.546 [INFO][4730] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.5/26] IPv6=[] ContainerID="a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" HandleID="k8s-pod-network.a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" Workload="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--7gc9s-eth0" May 13 23:43:41.585081 containerd[1805]: 2025-05-13 23:43:41.549 [INFO][4711] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" Namespace="kube-system" Pod="coredns-668d6bf9bc-7gc9s" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--7gc9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--7gc9s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6b164de3-9bee-4959-b322-d0fa7d8307a9", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"", Pod:"coredns-668d6bf9bc-7gc9s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18ea2b666da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:41.585081 containerd[1805]: 2025-05-13 23:43:41.549 [INFO][4711] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.5/32] ContainerID="a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" Namespace="kube-system" Pod="coredns-668d6bf9bc-7gc9s" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--7gc9s-eth0" May 13 23:43:41.585081 containerd[1805]: 2025-05-13 23:43:41.549 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18ea2b666da ContainerID="a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" Namespace="kube-system" Pod="coredns-668d6bf9bc-7gc9s" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--7gc9s-eth0" May 13 23:43:41.585081 containerd[1805]: 2025-05-13 23:43:41.561 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" Namespace="kube-system" Pod="coredns-668d6bf9bc-7gc9s" 
WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--7gc9s-eth0" May 13 23:43:41.585081 containerd[1805]: 2025-05-13 23:43:41.564 [INFO][4711] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" Namespace="kube-system" Pod="coredns-668d6bf9bc-7gc9s" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--7gc9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--7gc9s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6b164de3-9bee-4959-b322-d0fa7d8307a9", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588", Pod:"coredns-668d6bf9bc-7gc9s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18ea2b666da", MAC:"ce:9a:05:1b:01:a3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:41.585081 containerd[1805]: 2025-05-13 23:43:41.580 [INFO][4711] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" Namespace="kube-system" Pod="coredns-668d6bf9bc-7gc9s" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-coredns--668d6bf9bc--7gc9s-eth0" May 13 23:43:41.623054 containerd[1805]: time="2025-05-13T23:43:41.623021866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d667d8584-wmdz5,Uid:879c42d3-a34a-4e3d-babc-43636a4eac7f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\"" May 13 23:43:41.661191 containerd[1805]: time="2025-05-13T23:43:41.661089149Z" level=info msg="connecting to shim a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588" address="unix:///run/containerd/s/5d9829460bca23871981c30b3122f9886f0774e95cc0230c04aa0dacdb0b83cc" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:41.686808 systemd[1]: Started cri-containerd-a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588.scope - libcontainer container a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588. 
May 13 23:43:41.722006 containerd[1805]: time="2025-05-13T23:43:41.721959761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7gc9s,Uid:6b164de3-9bee-4959-b322-d0fa7d8307a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588\"" May 13 23:43:41.725501 containerd[1805]: time="2025-05-13T23:43:41.724698807Z" level=info msg="CreateContainer within sandbox \"a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:43:41.760236 containerd[1805]: time="2025-05-13T23:43:41.760208324Z" level=info msg="Container 95658359703c1e95edd4162afddfd43303cc43cb633f65044ba1a4799c88d4fb: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:41.782898 containerd[1805]: time="2025-05-13T23:43:41.782816253Z" level=info msg="CreateContainer within sandbox \"a5f417fe24b39da3058b8d54c2ad9f731c50aee6f76751f6fa24f28157dfe588\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95658359703c1e95edd4162afddfd43303cc43cb633f65044ba1a4799c88d4fb\"" May 13 23:43:41.783525 containerd[1805]: time="2025-05-13T23:43:41.783501494Z" level=info msg="StartContainer for \"95658359703c1e95edd4162afddfd43303cc43cb633f65044ba1a4799c88d4fb\"" May 13 23:43:41.784603 containerd[1805]: time="2025-05-13T23:43:41.784502337Z" level=info msg="connecting to shim 95658359703c1e95edd4162afddfd43303cc43cb633f65044ba1a4799c88d4fb" address="unix:///run/containerd/s/5d9829460bca23871981c30b3122f9886f0774e95cc0230c04aa0dacdb0b83cc" protocol=ttrpc version=3 May 13 23:43:41.812729 systemd[1]: Started cri-containerd-95658359703c1e95edd4162afddfd43303cc43cb633f65044ba1a4799c88d4fb.scope - libcontainer container 95658359703c1e95edd4162afddfd43303cc43cb633f65044ba1a4799c88d4fb. 
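Each containerd entry in this transcript is a logfmt-style line (time="..." level=info msg="..."). When slicing a capture like this one, a few lines of Go are enough to pull those three fields apart; the regular expression is a rough assumption that only honours backslash escapes inside msg:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        line := `time="2025-05-13T23:43:41.721959761Z" level=info msg="RunPodSandbox returns sandbox id"`
        re := regexp.MustCompile(`time="([^"]+)" level=(\w+) msg="((?:[^"\\]|\\.)*)"`)
        if m := re.FindStringSubmatch(line); m != nil {
            fmt.Println(m[1]) // 2025-05-13T23:43:41.721959761Z
            fmt.Println(m[2]) // info
            fmt.Println(m[3]) // RunPodSandbox returns sandbox id
        }
    }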
May 13 23:43:41.843712 containerd[1805]: time="2025-05-13T23:43:41.843640985Z" level=info msg="StartContainer for \"95658359703c1e95edd4162afddfd43303cc43cb633f65044ba1a4799c88d4fb\" returns successfully" May 13 23:43:42.342989 kubelet[3240]: I0513 23:43:42.342234 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7gc9s" podStartSLOduration=33.342217269 podStartE2EDuration="33.342217269s" podCreationTimestamp="2025-05-13 23:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:42.341449827 +0000 UTC m=+40.295932423" watchObservedRunningTime="2025-05-13 23:43:42.342217269 +0000 UTC m=+40.296699825" May 13 23:43:42.358712 systemd-networkd[1346]: cali9fbf4624f14: Gained IPv6LL May 13 23:43:42.806053 systemd-networkd[1346]: cali18ea2b666da: Gained IPv6LL May 13 23:43:43.125822 systemd-networkd[1346]: cali07080203104: Gained IPv6LL May 13 23:43:43.160547 containerd[1805]: time="2025-05-13T23:43:43.160247072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7dbd7c4d-d6t84,Uid:9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc,Namespace:calico-system,Attempt:0,}" May 13 23:43:43.161005 containerd[1805]: time="2025-05-13T23:43:43.160758793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7b2t,Uid:b076526f-88e8-4f5d-b600-5d379026eaec,Namespace:calico-system,Attempt:0,}" May 13 23:43:43.374554 systemd-networkd[1346]: calie46ef60241f: Link UP May 13 23:43:43.374879 systemd-networkd[1346]: calie46ef60241f: Gained carrier May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.237 [INFO][4897] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0 calico-kube-controllers-7b7dbd7c4d- calico-system 9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc 724 0 2025-05-13 23:43:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b7dbd7c4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284.0.0-n-5e434aba7d calico-kube-controllers-7b7dbd7c4d-d6t84 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie46ef60241f [] []}} ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Namespace="calico-system" Pod="calico-kube-controllers-7b7dbd7c4d-d6t84" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-" May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.238 [INFO][4897] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Namespace="calico-system" Pod="calico-kube-controllers-7b7dbd7c4d-d6t84" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.293 [INFO][4921] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" HandleID="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 
23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.310 [INFO][4921] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" HandleID="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cf30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-5e434aba7d", "pod":"calico-kube-controllers-7b7dbd7c4d-d6t84", "timestamp":"2025-05-13 23:43:43.292577278 +0000 UTC"}, Hostname:"ci-4284.0.0-n-5e434aba7d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.311 [INFO][4921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.311 [INFO][4921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.311 [INFO][4921] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-5e434aba7d' May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.313 [INFO][4921] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.319 [INFO][4921] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.324 [INFO][4921] ipam/ipam.go 489: Trying affinity for 192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.327 [INFO][4921] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.331 [INFO][4921] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.332 [INFO][4921] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.344 [INFO][4921] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35 May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.350 [INFO][4921] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.366 [INFO][4921] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.6/26] block=192.168.106.0/26 handle="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.366 [INFO][4921] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.6/26] 
handle="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.366 [INFO][4921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:43:43.398657 containerd[1805]: 2025-05-13 23:43:43.366 [INFO][4921] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.6/26] IPv6=[] ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" HandleID="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:43:43.399923 containerd[1805]: 2025-05-13 23:43:43.370 [INFO][4897] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Namespace="calico-system" Pod="calico-kube-controllers-7b7dbd7c4d-d6t84" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0", GenerateName:"calico-kube-controllers-7b7dbd7c4d-", Namespace:"calico-system", SelfLink:"", UID:"9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b7dbd7c4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"", Pod:"calico-kube-controllers-7b7dbd7c4d-d6t84", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie46ef60241f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:43.399923 containerd[1805]: 2025-05-13 23:43:43.370 [INFO][4897] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.6/32] ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Namespace="calico-system" Pod="calico-kube-controllers-7b7dbd7c4d-d6t84" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:43:43.399923 containerd[1805]: 2025-05-13 23:43:43.370 [INFO][4897] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie46ef60241f ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Namespace="calico-system" Pod="calico-kube-controllers-7b7dbd7c4d-d6t84" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:43:43.399923 containerd[1805]: 2025-05-13 23:43:43.375 [INFO][4897] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Namespace="calico-system" Pod="calico-kube-controllers-7b7dbd7c4d-d6t84" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:43:43.399923 containerd[1805]: 2025-05-13 23:43:43.376 [INFO][4897] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Namespace="calico-system" Pod="calico-kube-controllers-7b7dbd7c4d-d6t84" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0", GenerateName:"calico-kube-controllers-7b7dbd7c4d-", Namespace:"calico-system", SelfLink:"", UID:"9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b7dbd7c4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35", Pod:"calico-kube-controllers-7b7dbd7c4d-d6t84", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie46ef60241f", MAC:"ca:f0:29:92:67:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:43.399923 containerd[1805]: 2025-05-13 23:43:43.394 [INFO][4897] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Namespace="calico-system" Pod="calico-kube-controllers-7b7dbd7c4d-d6t84" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:43:43.451884 systemd-networkd[1346]: cali18798e44090: Link UP May 13 23:43:43.452759 systemd-networkd[1346]: cali18798e44090: Gained carrier May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.270 [INFO][4908] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--5e434aba7d-k8s-csi--node--driver--q7b2t-eth0 csi-node-driver- calico-system b076526f-88e8-4f5d-b600-5d379026eaec 618 0 2025-05-13 23:43:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4284.0.0-n-5e434aba7d csi-node-driver-q7b2t eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] cali18798e44090 [] []}} ContainerID="c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" Namespace="calico-system" Pod="csi-node-driver-q7b2t" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-csi--node--driver--q7b2t-" May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.271 [INFO][4908] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" Namespace="calico-system" Pod="csi-node-driver-q7b2t" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-csi--node--driver--q7b2t-eth0" May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.314 [INFO][4927] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" HandleID="k8s-pod-network.c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" Workload="ci--4284.0.0--n--5e434aba7d-k8s-csi--node--driver--q7b2t-eth0" May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.345 [INFO][4927] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" HandleID="k8s-pod-network.c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" Workload="ci--4284.0.0--n--5e434aba7d-k8s-csi--node--driver--q7b2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aeaa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-5e434aba7d", "pod":"csi-node-driver-q7b2t", "timestamp":"2025-05-13 23:43:43.314670919 +0000 UTC"}, Hostname:"ci-4284.0.0-n-5e434aba7d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.345 [INFO][4927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.368 [INFO][4927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.368 [INFO][4927] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-5e434aba7d' May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.415 [INFO][4927] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.419 [INFO][4927] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.423 [INFO][4927] ipam/ipam.go 489: Trying affinity for 192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.424 [INFO][4927] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.427 [INFO][4927] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.427 [INFO][4927] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.428 [INFO][4927] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50 May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.432 [INFO][4927] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.443 [INFO][4927] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.7/26] block=192.168.106.0/26 handle="k8s-pod-network.c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.443 [INFO][4927] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.7/26] handle="k8s-pod-network.c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.443 [INFO][4927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
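On the kubelet pod_startup_latency_tracker record above (at 23:43:42, for coredns-668d6bf9bc-7gc9s): podStartSLOduration=33.342217269s is exactly watchObservedRunningTime minus podCreationTimestamp; the firstStartedPulling/lastFinishedPulling pair is zeroed, apparently because the image needed no pull. Recomputed in plain Go from the two timestamps copied out of that kubelet line:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching Go's default time.Time formatting, which is
        // how the kubelet prints these fields.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-05-13 23:43:09 +0000 UTC")
        observed, _ := time.Parse(layout, "2025-05-13 23:43:42.342217269 +0000 UTC")
        fmt.Println(observed.Sub(created)) // 33.342217269s
    }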
May 13 23:43:43.473755 containerd[1805]: 2025-05-13 23:43:43.443 [INFO][4927] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.7/26] IPv6=[] ContainerID="c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" HandleID="k8s-pod-network.c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" Workload="ci--4284.0.0--n--5e434aba7d-k8s-csi--node--driver--q7b2t-eth0" May 13 23:43:43.474893 containerd[1805]: 2025-05-13 23:43:43.447 [INFO][4908] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" Namespace="calico-system" Pod="csi-node-driver-q7b2t" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-csi--node--driver--q7b2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-csi--node--driver--q7b2t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b076526f-88e8-4f5d-b600-5d379026eaec", ResourceVersion:"618", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"", Pod:"csi-node-driver-q7b2t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18798e44090", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:43.474893 containerd[1805]: 2025-05-13 23:43:43.447 [INFO][4908] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.7/32] ContainerID="c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" Namespace="calico-system" Pod="csi-node-driver-q7b2t" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-csi--node--driver--q7b2t-eth0" May 13 23:43:43.474893 containerd[1805]: 2025-05-13 23:43:43.448 [INFO][4908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18798e44090 ContainerID="c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" Namespace="calico-system" Pod="csi-node-driver-q7b2t" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-csi--node--driver--q7b2t-eth0" May 13 23:43:43.474893 containerd[1805]: 2025-05-13 23:43:43.453 [INFO][4908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" Namespace="calico-system" Pod="csi-node-driver-q7b2t" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-csi--node--driver--q7b2t-eth0" May 13 23:43:43.474893 containerd[1805]: 2025-05-13 23:43:43.453 [INFO][4908] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" 
Namespace="calico-system" Pod="csi-node-driver-q7b2t" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-csi--node--driver--q7b2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-csi--node--driver--q7b2t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b076526f-88e8-4f5d-b600-5d379026eaec", ResourceVersion:"618", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50", Pod:"csi-node-driver-q7b2t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18798e44090", MAC:"c2:6d:00:c9:a1:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:43.474893 containerd[1805]: 2025-05-13 23:43:43.470 [INFO][4908] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" Namespace="calico-system" Pod="csi-node-driver-q7b2t" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-csi--node--driver--q7b2t-eth0" May 13 23:43:43.891443 containerd[1805]: time="2025-05-13T23:43:43.891362313Z" level=info msg="connecting to shim c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50" address="unix:///run/containerd/s/768bc2882c1e36cf4a053ece0ad76bdac1cf0cb74111487b2aa836c07cfe5d8f" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:43.927743 systemd[1]: Started cri-containerd-c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50.scope - libcontainer container c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50. May 13 23:43:43.943585 containerd[1805]: time="2025-05-13T23:43:43.943112489Z" level=info msg="connecting to shim 3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" address="unix:///run/containerd/s/a2bbf082b5ce9ff33946b0f28551c841b556b93c29f0ff3b1c6b22810052be75" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:43.977288 containerd[1805]: time="2025-05-13T23:43:43.977240073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7b2t,Uid:b076526f-88e8-4f5d-b600-5d379026eaec,Namespace:calico-system,Attempt:0,} returns sandbox id \"c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50\"" May 13 23:43:43.992732 systemd[1]: Started cri-containerd-3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35.scope - libcontainer container 3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35. 
May 13 23:43:44.035597 containerd[1805]: time="2025-05-13T23:43:44.035545141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7dbd7c4d-d6t84,Uid:9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc,Namespace:calico-system,Attempt:0,} returns sandbox id \"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\"" May 13 23:43:44.522276 containerd[1805]: time="2025-05-13T23:43:44.522141887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:44.525098 containerd[1805]: time="2025-05-13T23:43:44.524904292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 13 23:43:44.531014 containerd[1805]: time="2025-05-13T23:43:44.530725223Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:44.540055 containerd[1805]: time="2025-05-13T23:43:44.540015240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:44.540868 containerd[1805]: time="2025-05-13T23:43:44.540836042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 4.977647002s" May 13 23:43:44.540868 containerd[1805]: time="2025-05-13T23:43:44.540867362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 23:43:44.550467 containerd[1805]: time="2025-05-13T23:43:44.550158779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:43:44.551920 containerd[1805]: time="2025-05-13T23:43:44.551563542Z" level=info msg="CreateContainer within sandbox \"6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:43:44.600818 containerd[1805]: time="2025-05-13T23:43:44.600778673Z" level=info msg="Container 012b896e8b0b017039ba7ae71d1f29ad0d6e17d2bfef530e02f675c8160794d8: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:44.604440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1723094177.mount: Deactivated successfully. 
May 13 23:43:44.657493 containerd[1805]: time="2025-05-13T23:43:44.657446179Z" level=info msg="CreateContainer within sandbox \"6eacaeacb82aed4e09c202b981b9da0b68cc9e65cc366b464f92794674ad6373\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"012b896e8b0b017039ba7ae71d1f29ad0d6e17d2bfef530e02f675c8160794d8\"" May 13 23:43:44.669744 containerd[1805]: time="2025-05-13T23:43:44.668843400Z" level=info msg="StartContainer for \"012b896e8b0b017039ba7ae71d1f29ad0d6e17d2bfef530e02f675c8160794d8\"" May 13 23:43:44.671452 containerd[1805]: time="2025-05-13T23:43:44.671426965Z" level=info msg="connecting to shim 012b896e8b0b017039ba7ae71d1f29ad0d6e17d2bfef530e02f675c8160794d8" address="unix:///run/containerd/s/528ea96a87d14cc94d3571a5a0b97266855c6cbb4f784d45175650a54edfebde" protocol=ttrpc version=3 May 13 23:43:44.692750 systemd[1]: Started cri-containerd-012b896e8b0b017039ba7ae71d1f29ad0d6e17d2bfef530e02f675c8160794d8.scope - libcontainer container 012b896e8b0b017039ba7ae71d1f29ad0d6e17d2bfef530e02f675c8160794d8. May 13 23:43:44.736580 containerd[1805]: time="2025-05-13T23:43:44.736539086Z" level=info msg="StartContainer for \"012b896e8b0b017039ba7ae71d1f29ad0d6e17d2bfef530e02f675c8160794d8\" returns successfully" May 13 23:43:44.917692 systemd-networkd[1346]: calie46ef60241f: Gained IPv6LL May 13 23:43:44.997396 containerd[1805]: time="2025-05-13T23:43:44.997347772Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:45.001652 containerd[1805]: time="2025-05-13T23:43:45.001608780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 23:43:45.002834 containerd[1805]: time="2025-05-13T23:43:45.002807702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 452.611803ms" May 13 23:43:45.002882 containerd[1805]: time="2025-05-13T23:43:45.002838262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 23:43:45.008545 containerd[1805]: time="2025-05-13T23:43:45.008483592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:43:45.014710 containerd[1805]: time="2025-05-13T23:43:45.013048441Z" level=info msg="CreateContainer within sandbox \"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:43:45.044529 containerd[1805]: time="2025-05-13T23:43:45.043683738Z" level=info msg="Container 5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:45.072106 containerd[1805]: time="2025-05-13T23:43:45.072060791Z" level=info msg="CreateContainer within sandbox \"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2\"" May 13 23:43:45.074750 containerd[1805]: time="2025-05-13T23:43:45.072969592Z" level=info msg="StartContainer for 
\"5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2\"" May 13 23:43:45.075621 containerd[1805]: time="2025-05-13T23:43:45.075416677Z" level=info msg="connecting to shim 5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2" address="unix:///run/containerd/s/c53f0f9c520b8dab493bb5488b3cd66c267f691a5d07482e46fd66b85c0c306d" protocol=ttrpc version=3 May 13 23:43:45.099772 systemd[1]: Started cri-containerd-5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2.scope - libcontainer container 5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2. May 13 23:43:45.148821 containerd[1805]: time="2025-05-13T23:43:45.148784974Z" level=info msg="StartContainer for \"5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2\" returns successfully" May 13 23:43:45.237704 systemd-networkd[1346]: cali18798e44090: Gained IPv6LL May 13 23:43:45.365691 kubelet[3240]: I0513 23:43:45.364495 3240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:43:45.430382 kubelet[3240]: I0513 23:43:45.429929 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9fc98d758-tglxs" podStartSLOduration=23.447510646 podStartE2EDuration="28.429910257s" podCreationTimestamp="2025-05-13 23:43:17 +0000 UTC" firstStartedPulling="2025-05-13 23:43:39.562201998 +0000 UTC m=+37.516684594" lastFinishedPulling="2025-05-13 23:43:44.544601529 +0000 UTC m=+42.499084205" observedRunningTime="2025-05-13 23:43:45.402250965 +0000 UTC m=+43.356733561" watchObservedRunningTime="2025-05-13 23:43:45.429910257 +0000 UTC m=+43.384392853" May 13 23:43:45.458925 containerd[1805]: time="2025-05-13T23:43:45.458862111Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:45.464237 containerd[1805]: time="2025-05-13T23:43:45.464177881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 23:43:45.465439 containerd[1805]: time="2025-05-13T23:43:45.465404563Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 456.891851ms" May 13 23:43:45.465492 containerd[1805]: time="2025-05-13T23:43:45.465436603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 23:43:45.466899 containerd[1805]: time="2025-05-13T23:43:45.466789526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 23:43:45.467764 containerd[1805]: time="2025-05-13T23:43:45.467386247Z" level=info msg="CreateContainer within sandbox \"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:43:45.529556 containerd[1805]: time="2025-05-13T23:43:45.528662401Z" level=info msg="Container 79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:45.560646 containerd[1805]: time="2025-05-13T23:43:45.560567140Z" level=info msg="CreateContainer within sandbox 
\"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\"" May 13 23:43:45.561320 containerd[1805]: time="2025-05-13T23:43:45.561273581Z" level=info msg="StartContainer for \"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\"" May 13 23:43:45.562297 containerd[1805]: time="2025-05-13T23:43:45.562250183Z" level=info msg="connecting to shim 79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57" address="unix:///run/containerd/s/d77e5dc55865e09701610af4751ac01691f2f4b7e58d6cc9a0444f1a39cd9cb9" protocol=ttrpc version=3 May 13 23:43:45.586234 containerd[1805]: time="2025-05-13T23:43:45.582226300Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\" id:\"40b6a63bffb786520ab35f1ff169760311140e9d62276ad03647cdc8e422ccb7\" pid:5150 exited_at:{seconds:1747179825 nanos:581775220}" May 13 23:43:45.592757 systemd[1]: Started cri-containerd-79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57.scope - libcontainer container 79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57. May 13 23:43:45.622426 kubelet[3240]: I0513 23:43:45.621604 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d667d8584-nzwmm" podStartSLOduration=24.230859501 podStartE2EDuration="29.621524414s" podCreationTimestamp="2025-05-13 23:43:16 +0000 UTC" firstStartedPulling="2025-05-13 23:43:39.617747119 +0000 UTC m=+37.572229715" lastFinishedPulling="2025-05-13 23:43:45.008411992 +0000 UTC m=+42.962894628" observedRunningTime="2025-05-13 23:43:45.431106979 +0000 UTC m=+43.385589575" watchObservedRunningTime="2025-05-13 23:43:45.621524414 +0000 UTC m=+43.576007010" May 13 23:43:45.782511 containerd[1805]: time="2025-05-13T23:43:45.782304193Z" level=info msg="StartContainer for \"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\" returns successfully" May 13 23:43:45.798780 containerd[1805]: time="2025-05-13T23:43:45.798742503Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\" id:\"bfa65aedd64b207b9a0d10ff80e75a818849dc6c6d9630bde8e6bbf069267607\" pid:5196 exited_at:{seconds:1747179825 nanos:791948251}" May 13 23:43:46.374183 kubelet[3240]: I0513 23:43:46.374140 3240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:43:46.377890 kubelet[3240]: I0513 23:43:46.377567 3240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:43:47.096483 containerd[1805]: time="2025-05-13T23:43:47.096437639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:47.100456 containerd[1805]: time="2025-05-13T23:43:47.100402887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 23:43:47.107433 containerd[1805]: time="2025-05-13T23:43:47.105820897Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:47.115275 containerd[1805]: time="2025-05-13T23:43:47.115239074Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:47.116323 containerd[1805]: time="2025-05-13T23:43:47.115998636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.64888315s" May 13 23:43:47.116434 containerd[1805]: time="2025-05-13T23:43:47.116418156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 23:43:47.119812 containerd[1805]: time="2025-05-13T23:43:47.119436522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 23:43:47.121082 containerd[1805]: time="2025-05-13T23:43:47.121053445Z" level=info msg="CreateContainer within sandbox \"c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 23:43:47.161627 containerd[1805]: time="2025-05-13T23:43:47.161560000Z" level=info msg="Container 01039a3574f8fa1b0afedb57bcbcc263729b2a8a5093472c7a44d62249d92967: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:47.192145 containerd[1805]: time="2025-05-13T23:43:47.192012177Z" level=info msg="CreateContainer within sandbox \"c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"01039a3574f8fa1b0afedb57bcbcc263729b2a8a5093472c7a44d62249d92967\"" May 13 23:43:47.192910 containerd[1805]: time="2025-05-13T23:43:47.192878979Z" level=info msg="StartContainer for \"01039a3574f8fa1b0afedb57bcbcc263729b2a8a5093472c7a44d62249d92967\"" May 13 23:43:47.194236 containerd[1805]: time="2025-05-13T23:43:47.194205661Z" level=info msg="connecting to shim 01039a3574f8fa1b0afedb57bcbcc263729b2a8a5093472c7a44d62249d92967" address="unix:///run/containerd/s/768bc2882c1e36cf4a053ece0ad76bdac1cf0cb74111487b2aa836c07cfe5d8f" protocol=ttrpc version=3 May 13 23:43:47.234737 systemd[1]: Started cri-containerd-01039a3574f8fa1b0afedb57bcbcc263729b2a8a5093472c7a44d62249d92967.scope - libcontainer container 01039a3574f8fa1b0afedb57bcbcc263729b2a8a5093472c7a44d62249d92967. 
May 13 23:43:47.350713 containerd[1805]: time="2025-05-13T23:43:47.348664109Z" level=info msg="StartContainer for \"01039a3574f8fa1b0afedb57bcbcc263729b2a8a5093472c7a44d62249d92967\" returns successfully" May 13 23:43:47.377306 kubelet[3240]: I0513 23:43:47.377279 3240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:43:48.721493 kubelet[3240]: I0513 23:43:48.721277 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d667d8584-wmdz5" podStartSLOduration=28.879575809 podStartE2EDuration="32.721257784s" podCreationTimestamp="2025-05-13 23:43:16 +0000 UTC" firstStartedPulling="2025-05-13 23:43:41.62457267 +0000 UTC m=+39.579055266" lastFinishedPulling="2025-05-13 23:43:45.466254645 +0000 UTC m=+43.420737241" observedRunningTime="2025-05-13 23:43:46.395918135 +0000 UTC m=+44.350400731" watchObservedRunningTime="2025-05-13 23:43:48.721257784 +0000 UTC m=+46.675740340" May 13 23:43:48.726637 containerd[1805]: time="2025-05-13T23:43:48.726153113Z" level=info msg="StopContainer for \"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\" with timeout 300 (s)" May 13 23:43:48.739078 containerd[1805]: time="2025-05-13T23:43:48.739037057Z" level=info msg="Stop container \"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\" with signal terminated" May 13 23:43:48.908632 containerd[1805]: time="2025-05-13T23:43:48.908544973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\" id:\"5bb4b4b2022e5a4e1f782c74774347c8a1074f51159119801752ef6394f95ea2\" pid:5280 exited_at:{seconds:1747179828 nanos:908230132}" May 13 23:43:48.912041 containerd[1805]: time="2025-05-13T23:43:48.912009339Z" level=info msg="StopContainer for \"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\" with timeout 5 (s)" May 13 23:43:48.912417 containerd[1805]: time="2025-05-13T23:43:48.912389180Z" level=info msg="Stop container \"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\" with signal terminated" May 13 23:43:48.944466 systemd[1]: cri-containerd-f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c.scope: Deactivated successfully. May 13 23:43:48.945750 systemd[1]: cri-containerd-f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c.scope: Consumed 1.276s CPU time, 155.1M memory peak, 595K read from disk, 624K written to disk. May 13 23:43:48.949640 containerd[1805]: time="2025-05-13T23:43:48.949423689Z" level=info msg="received exit event container_id:\"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\" id:\"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\" pid:4188 exited_at:{seconds:1747179828 nanos:948475527}" May 13 23:43:48.950269 containerd[1805]: time="2025-05-13T23:43:48.949802529Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\" id:\"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\" pid:4188 exited_at:{seconds:1747179828 nanos:948475527}" May 13 23:43:48.977209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c-rootfs.mount: Deactivated successfully. 
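The pod_startup_latency_tracker numbers above are internally consistent: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration additionally excludes time spent pulling images. This decomposition is inferred from the logged values rather than quoted from kubelet source; reproducing the calico-apiserver-6d667d8584-wmdz5 figures:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the wmdz5 log entry above.
	created := parse("2025-05-13 23:43:16 +0000 UTC")    // podCreationTimestamp
	firstPull := parse("2025-05-13 23:43:41.62457267 +0000 UTC")
	lastPull := parse("2025-05-13 23:43:45.466254645 +0000 UTC")
	watched := parse("2025-05-13 23:43:48.721257784 +0000 UTC") // watchObservedRunningTime

	e2e := watched.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // E2E minus image pull time

	fmt.Println("podStartE2EDuration:", e2e) // 32.721257784s
	fmt.Println("podStartSLOduration:", slo) // 28.879575809s
}

Both printed values match the logged podStartE2EDuration and podStartSLOduration exactly.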
May 13 23:43:49.390851 containerd[1805]: time="2025-05-13T23:43:49.390803270Z" level=info msg="StopContainer for \"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\" returns successfully" May 13 23:43:49.394039 containerd[1805]: time="2025-05-13T23:43:49.394008476Z" level=info msg="StopPodSandbox for \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\"" May 13 23:43:49.404985 containerd[1805]: time="2025-05-13T23:43:49.404940377Z" level=info msg="Container to stop \"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:43:49.404985 containerd[1805]: time="2025-05-13T23:43:49.404981337Z" level=info msg="Container to stop \"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:43:49.405093 containerd[1805]: time="2025-05-13T23:43:49.404996417Z" level=info msg="Container to stop \"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:43:49.410559 systemd[1]: cri-containerd-4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a.scope: Deactivated successfully. May 13 23:43:49.415127 containerd[1805]: time="2025-05-13T23:43:49.415088196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" id:\"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" pid:3736 exit_status:137 exited_at:{seconds:1747179829 nanos:414751715}" May 13 23:43:49.437682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a-rootfs.mount: Deactivated successfully. May 13 23:43:49.438015 containerd[1805]: time="2025-05-13T23:43:49.437674198Z" level=info msg="shim disconnected" id=4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a namespace=k8s.io May 13 23:43:49.438015 containerd[1805]: time="2025-05-13T23:43:49.437822158Z" level=warning msg="cleaning up after shim disconnected" id=4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a namespace=k8s.io May 13 23:43:49.438015 containerd[1805]: time="2025-05-13T23:43:49.437853518Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:43:49.467582 containerd[1805]: time="2025-05-13T23:43:49.465234729Z" level=info msg="TearDown network for sandbox \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" successfully" May 13 23:43:49.467582 containerd[1805]: time="2025-05-13T23:43:49.465276489Z" level=info msg="StopPodSandbox for \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" returns successfully" May 13 23:43:49.468525 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a-shm.mount: Deactivated successfully. 
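The stop sequence above is the CRI shutdown path: "Stop container ... with signal terminated" sends SIGTERM, the runtime waits up to the per-call timeout (300 s for the typha container, 5 s for calico-node), escalates to SIGKILL if needed, and StopPodSandbox then tears down the pod network. A sketch of the same two calls against containerd's CRI endpoint, with IDs taken from the log; the endpoint path and timeouts match what these entries show:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// SIGTERM, then SIGKILL if the container outlives the 5 s grace period
	// (the timeout the kubelet used for calico-node above).
	containerID := "f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c"
	sandboxID := "4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a"
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: containerID, Timeout: 5,
	}); err != nil {
		log.Fatal(err)
	}
	// Tears down the pod network (the "TearDown network" line) and stops
	// anything still running in the sandbox.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: sandboxID,
	}); err != nil {
		log.Fatal(err)
	}
}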
May 13 23:43:49.474290 containerd[1805]: time="2025-05-13T23:43:49.474246306Z" level=info msg="received exit event sandbox_id:\"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" exit_status:137 exited_at:{seconds:1747179829 nanos:414751715}" May 13 23:43:49.534457 kubelet[3240]: I0513 23:43:49.534408 3240 memory_manager.go:355] "RemoveStaleState removing state" podUID="19b2eae0-3ae2-409c-b6b4-0b92be88d01f" containerName="calico-node" May 13 23:43:49.541109 kubelet[3240]: I0513 23:43:49.539538 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-policysync\") pod \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " May 13 23:43:49.541109 kubelet[3240]: I0513 23:43:49.539569 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-cni-bin-dir\") pod \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " May 13 23:43:49.541109 kubelet[3240]: I0513 23:43:49.539602 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-tigera-ca-bundle\") pod \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " May 13 23:43:49.541109 kubelet[3240]: I0513 23:43:49.539622 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-flexvol-driver-host\") pod \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " May 13 23:43:49.541109 kubelet[3240]: I0513 23:43:49.539638 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-cni-net-dir\") pod \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " May 13 23:43:49.541109 kubelet[3240]: I0513 23:43:49.539643 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-policysync" (OuterVolumeSpecName: "policysync") pod "19b2eae0-3ae2-409c-b6b4-0b92be88d01f" (UID: "19b2eae0-3ae2-409c-b6b4-0b92be88d01f"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 23:43:49.541374 kubelet[3240]: I0513 23:43:49.539654 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-xtables-lock\") pod \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " May 13 23:43:49.541374 kubelet[3240]: I0513 23:43:49.539670 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-var-run-calico\") pod \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " May 13 23:43:49.541374 kubelet[3240]: I0513 23:43:49.539685 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-var-lib-calico\") pod \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " May 13 23:43:49.541374 kubelet[3240]: I0513 23:43:49.539702 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-lib-modules\") pod \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " May 13 23:43:49.541374 kubelet[3240]: I0513 23:43:49.539728 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vk4x\" (UniqueName: \"kubernetes.io/projected/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-kube-api-access-6vk4x\") pod \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " May 13 23:43:49.541374 kubelet[3240]: I0513 23:43:49.539749 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-cni-log-dir\") pod \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " May 13 23:43:49.541504 kubelet[3240]: I0513 23:43:49.539766 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-node-certs\") pod \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\" (UID: \"19b2eae0-3ae2-409c-b6b4-0b92be88d01f\") " May 13 23:43:49.541504 kubelet[3240]: I0513 23:43:49.539814 3240 reconciler_common.go:299] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-policysync\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:49.543279 systemd[1]: Created slice kubepods-besteffort-pod48e4cfcf_e613_4078_9d18_b370320a6f62.slice - libcontainer container kubepods-besteffort-pod48e4cfcf_e613_4078_9d18_b370320a6f62.slice. May 13 23:43:49.545867 kubelet[3240]: I0513 23:43:49.539684 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "19b2eae0-3ae2-409c-b6b4-0b92be88d01f" (UID: "19b2eae0-3ae2-409c-b6b4-0b92be88d01f"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 23:43:49.545867 kubelet[3240]: I0513 23:43:49.539700 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "19b2eae0-3ae2-409c-b6b4-0b92be88d01f" (UID: "19b2eae0-3ae2-409c-b6b4-0b92be88d01f"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 23:43:49.545867 kubelet[3240]: I0513 23:43:49.545656 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "19b2eae0-3ae2-409c-b6b4-0b92be88d01f" (UID: "19b2eae0-3ae2-409c-b6b4-0b92be88d01f"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 23:43:49.545867 kubelet[3240]: I0513 23:43:49.545684 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "19b2eae0-3ae2-409c-b6b4-0b92be88d01f" (UID: "19b2eae0-3ae2-409c-b6b4-0b92be88d01f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 23:43:49.545867 kubelet[3240]: I0513 23:43:49.545698 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "19b2eae0-3ae2-409c-b6b4-0b92be88d01f" (UID: "19b2eae0-3ae2-409c-b6b4-0b92be88d01f"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 23:43:49.546038 kubelet[3240]: I0513 23:43:49.545710 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "19b2eae0-3ae2-409c-b6b4-0b92be88d01f" (UID: "19b2eae0-3ae2-409c-b6b4-0b92be88d01f"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 23:43:49.546038 kubelet[3240]: I0513 23:43:49.545773 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "19b2eae0-3ae2-409c-b6b4-0b92be88d01f" (UID: "19b2eae0-3ae2-409c-b6b4-0b92be88d01f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 23:43:49.555348 kubelet[3240]: I0513 23:43:49.553991 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "19b2eae0-3ae2-409c-b6b4-0b92be88d01f" (UID: "19b2eae0-3ae2-409c-b6b4-0b92be88d01f"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 23:43:49.558479 systemd[1]: var-lib-kubelet-pods-19b2eae0\x2d3ae2\x2d409c\x2db6b4\x2d0b92be88d01f-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. May 13 23:43:49.562757 systemd[1]: var-lib-kubelet-pods-19b2eae0\x2d3ae2\x2d409c\x2db6b4\x2d0b92be88d01f-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
May 13 23:43:49.565955 kubelet[3240]: I0513 23:43:49.565913 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-node-certs" (OuterVolumeSpecName: "node-certs") pod "19b2eae0-3ae2-409c-b6b4-0b92be88d01f" (UID: "19b2eae0-3ae2-409c-b6b4-0b92be88d01f"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 23:43:49.567497 systemd[1]: var-lib-kubelet-pods-19b2eae0\x2d3ae2\x2d409c\x2db6b4\x2d0b92be88d01f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6vk4x.mount: Deactivated successfully. May 13 23:43:49.568876 kubelet[3240]: I0513 23:43:49.568838 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-kube-api-access-6vk4x" (OuterVolumeSpecName: "kube-api-access-6vk4x") pod "19b2eae0-3ae2-409c-b6b4-0b92be88d01f" (UID: "19b2eae0-3ae2-409c-b6b4-0b92be88d01f"). InnerVolumeSpecName "kube-api-access-6vk4x". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 23:43:49.569986 kubelet[3240]: I0513 23:43:49.569803 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "19b2eae0-3ae2-409c-b6b4-0b92be88d01f" (UID: "19b2eae0-3ae2-409c-b6b4-0b92be88d01f"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 23:43:49.640097 kubelet[3240]: I0513 23:43:49.640019 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48e4cfcf-e613-4078-9d18-b370320a6f62-lib-modules\") pod \"calico-node-554zr\" (UID: \"48e4cfcf-e613-4078-9d18-b370320a6f62\") " pod="calico-system/calico-node-554zr" May 13 23:43:49.640525 kubelet[3240]: I0513 23:43:49.640269 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48e4cfcf-e613-4078-9d18-b370320a6f62-xtables-lock\") pod \"calico-node-554zr\" (UID: \"48e4cfcf-e613-4078-9d18-b370320a6f62\") " pod="calico-system/calico-node-554zr" May 13 23:43:49.640525 kubelet[3240]: I0513 23:43:49.640293 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/48e4cfcf-e613-4078-9d18-b370320a6f62-var-lib-calico\") pod \"calico-node-554zr\" (UID: \"48e4cfcf-e613-4078-9d18-b370320a6f62\") " pod="calico-system/calico-node-554zr" May 13 23:43:49.640525 kubelet[3240]: I0513 23:43:49.640317 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/48e4cfcf-e613-4078-9d18-b370320a6f62-cni-bin-dir\") pod \"calico-node-554zr\" (UID: \"48e4cfcf-e613-4078-9d18-b370320a6f62\") " pod="calico-system/calico-node-554zr" May 13 23:43:49.640525 kubelet[3240]: I0513 23:43:49.640367 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/48e4cfcf-e613-4078-9d18-b370320a6f62-flexvol-driver-host\") pod \"calico-node-554zr\" (UID: \"48e4cfcf-e613-4078-9d18-b370320a6f62\") " pod="calico-system/calico-node-554zr" May 13 23:43:49.640525 kubelet[3240]: I0513 23:43:49.640397 3240 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/48e4cfcf-e613-4078-9d18-b370320a6f62-var-run-calico\") pod \"calico-node-554zr\" (UID: \"48e4cfcf-e613-4078-9d18-b370320a6f62\") " pod="calico-system/calico-node-554zr" May 13 23:43:49.640708 kubelet[3240]: I0513 23:43:49.640424 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/48e4cfcf-e613-4078-9d18-b370320a6f62-cni-log-dir\") pod \"calico-node-554zr\" (UID: \"48e4cfcf-e613-4078-9d18-b370320a6f62\") " pod="calico-system/calico-node-554zr" May 13 23:43:49.640708 kubelet[3240]: I0513 23:43:49.640444 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/48e4cfcf-e613-4078-9d18-b370320a6f62-node-certs\") pod \"calico-node-554zr\" (UID: \"48e4cfcf-e613-4078-9d18-b370320a6f62\") " pod="calico-system/calico-node-554zr" May 13 23:43:49.640708 kubelet[3240]: I0513 23:43:49.640463 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48e4cfcf-e613-4078-9d18-b370320a6f62-tigera-ca-bundle\") pod \"calico-node-554zr\" (UID: \"48e4cfcf-e613-4078-9d18-b370320a6f62\") " pod="calico-system/calico-node-554zr" May 13 23:43:49.640708 kubelet[3240]: I0513 23:43:49.640478 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bms7k\" (UniqueName: \"kubernetes.io/projected/48e4cfcf-e613-4078-9d18-b370320a6f62-kube-api-access-bms7k\") pod \"calico-node-554zr\" (UID: \"48e4cfcf-e613-4078-9d18-b370320a6f62\") " pod="calico-system/calico-node-554zr" May 13 23:43:49.640708 kubelet[3240]: I0513 23:43:49.640561 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/48e4cfcf-e613-4078-9d18-b370320a6f62-policysync\") pod \"calico-node-554zr\" (UID: \"48e4cfcf-e613-4078-9d18-b370320a6f62\") " pod="calico-system/calico-node-554zr" May 13 23:43:49.640815 kubelet[3240]: I0513 23:43:49.640610 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/48e4cfcf-e613-4078-9d18-b370320a6f62-cni-net-dir\") pod \"calico-node-554zr\" (UID: \"48e4cfcf-e613-4078-9d18-b370320a6f62\") " pod="calico-system/calico-node-554zr" May 13 23:43:49.640815 kubelet[3240]: I0513 23:43:49.640649 3240 reconciler_common.go:299] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-cni-log-dir\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:49.640815 kubelet[3240]: I0513 23:43:49.640661 3240 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-xtables-lock\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:49.640815 kubelet[3240]: I0513 23:43:49.640670 3240 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-var-lib-calico\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:49.640815 kubelet[3240]: I0513 23:43:49.640680 3240 reconciler_common.go:299] "Volume 
detached for volume \"kube-api-access-6vk4x\" (UniqueName: \"kubernetes.io/projected/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-kube-api-access-6vk4x\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:49.640815 kubelet[3240]: I0513 23:43:49.640689 3240 reconciler_common.go:299] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-node-certs\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:49.640815 kubelet[3240]: I0513 23:43:49.640698 3240 reconciler_common.go:299] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-cni-bin-dir\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:49.641004 kubelet[3240]: I0513 23:43:49.640707 3240 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-tigera-ca-bundle\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:49.641004 kubelet[3240]: I0513 23:43:49.640715 3240 reconciler_common.go:299] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-flexvol-driver-host\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:49.641004 kubelet[3240]: I0513 23:43:49.640722 3240 reconciler_common.go:299] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-cni-net-dir\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:49.641004 kubelet[3240]: I0513 23:43:49.640732 3240 reconciler_common.go:299] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-var-run-calico\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:49.641004 kubelet[3240]: I0513 23:43:49.640739 3240 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19b2eae0-3ae2-409c-b6b4-0b92be88d01f-lib-modules\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:49.877709 containerd[1805]: time="2025-05-13T23:43:49.877117696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-554zr,Uid:48e4cfcf-e613-4078-9d18-b370320a6f62,Namespace:calico-system,Attempt:0,}" May 13 23:43:49.878679 systemd[1]: cri-containerd-9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311.scope: Deactivated successfully. May 13 23:43:49.888275 containerd[1805]: time="2025-05-13T23:43:49.888182796Z" level=info msg="received exit event container_id:\"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\" id:\"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\" pid:3769 exit_status:1 exited_at:{seconds:1747179829 nanos:887794716}" May 13 23:43:49.888516 containerd[1805]: time="2025-05-13T23:43:49.888397357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\" id:\"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\" pid:3769 exit_status:1 exited_at:{seconds:1747179829 nanos:887794716}" May 13 23:43:49.981876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311-rootfs.mount: Deactivated successfully. 
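The block of UnmountVolume/VerifyControllerAttachedVolume/"Volume detached" lines shows the kubelet's volume reconciler swapping the old calico-node pod (UID 19b2eae0-...) for its replacement (48e4cfcf-...): volumes are keyed per pod UID, so the old pod's mounts are no longer desired even though the new pod uses identical volume names. A toy sketch of the desired-vs-actual diff driving those messages, not the kubelet implementation:

package main

import "fmt"

// reconcile diffs actual mounts against desired mounts, the pattern behind
// the UnmountVolume / VerifyControllerAttachedVolume lines above.
func reconcile(desired, actual map[string]bool) {
	for vol := range actual {
		if !desired[vol] {
			fmt.Println("UnmountVolume started for", vol)
		}
	}
	for vol := range desired {
		if !actual[vol] {
			fmt.Println("VerifyControllerAttachedVolume started for", vol)
		}
	}
}

func main() {
	// Keys are podUID/volumeName; UIDs shortened here for readability.
	actual := map[string]bool{ // old pod's volumes, still mounted
		"19b2eae0/policysync": true, "19b2eae0/node-certs": true,
	}
	desired := map[string]bool{ // replacement pod, same names, new UID
		"48e4cfcf/policysync": true, "48e4cfcf/node-certs": true,
	}
	reconcile(desired, actual)
}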
May 13 23:43:49.999287 containerd[1805]: time="2025-05-13T23:43:49.998883924Z" level=info msg="StopContainer for \"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\" returns successfully" May 13 23:43:50.000661 containerd[1805]: time="2025-05-13T23:43:50.000515687Z" level=info msg="StopPodSandbox for \"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\"" May 13 23:43:50.000906 containerd[1805]: time="2025-05-13T23:43:50.000604487Z" level=info msg="Container to stop \"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:43:50.001767 containerd[1805]: time="2025-05-13T23:43:50.001730969Z" level=info msg="connecting to shim 69e1e4b23ce5883302df41da3513816c4484d9923279bcad32fbc5782e78b1c0" address="unix:///run/containerd/s/43194a6c7cac6bbe939295f296d80358a85e989230d8fa676e6cc28a724c3991" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:50.016365 systemd[1]: cri-containerd-775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956.scope: Deactivated successfully. May 13 23:43:50.018571 containerd[1805]: time="2025-05-13T23:43:50.018470922Z" level=info msg="TaskExit event in podsandbox handler container_id:\"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\" id:\"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\" pid:3690 exit_status:137 exited_at:{seconds:1747179830 nanos:18112921}" May 13 23:43:50.043121 systemd[1]: Started cri-containerd-69e1e4b23ce5883302df41da3513816c4484d9923279bcad32fbc5782e78b1c0.scope - libcontainer container 69e1e4b23ce5883302df41da3513816c4484d9923279bcad32fbc5782e78b1c0. May 13 23:43:50.063521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956-rootfs.mount: Deactivated successfully. May 13 23:43:50.064428 containerd[1805]: time="2025-05-13T23:43:50.064381650Z" level=info msg="shim disconnected" id=775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956 namespace=k8s.io May 13 23:43:50.068654 containerd[1805]: time="2025-05-13T23:43:50.065278172Z" level=warning msg="cleaning up after shim disconnected" id=775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956 namespace=k8s.io May 13 23:43:50.068654 containerd[1805]: time="2025-05-13T23:43:50.065776332Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:43:50.070518 containerd[1805]: time="2025-05-13T23:43:50.064812611Z" level=info msg="received exit event sandbox_id:\"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\" exit_status:137 exited_at:{seconds:1747179830 nanos:18112921}" May 13 23:43:50.070518 containerd[1805]: time="2025-05-13T23:43:50.070283821Z" level=info msg="TearDown network for sandbox \"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\" successfully" May 13 23:43:50.070518 containerd[1805]: time="2025-05-13T23:43:50.070309181Z" level=info msg="StopPodSandbox for \"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\" returns successfully" May 13 23:43:50.071111 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956-shm.mount: Deactivated successfully. 
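exit_status:137 in the sandbox TaskExit events decodes as 128 plus the signal number, i.e. SIGKILL (9): sandbox pause processes are killed outright rather than asked to exit. A quick decode (Linux assumed):

package main

import (
	"fmt"
	"syscall"
)

func main() {
	exitStatus := 137 // from the TaskExit events above
	if exitStatus > 128 {
		sig := syscall.Signal(exitStatus - 128)
		fmt.Printf("exit_status %d => terminated by signal %d (%v)\n",
			exitStatus, int(sig), sig)
		// exit_status 137 => terminated by signal 9 (killed)
	}
}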
May 13 23:43:50.137409 containerd[1805]: time="2025-05-13T23:43:50.136962189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-554zr,Uid:48e4cfcf-e613-4078-9d18-b370320a6f62,Namespace:calico-system,Attempt:0,} returns sandbox id \"69e1e4b23ce5883302df41da3513816c4484d9923279bcad32fbc5782e78b1c0\"" May 13 23:43:50.141152 containerd[1805]: time="2025-05-13T23:43:50.141066637Z" level=info msg="CreateContainer within sandbox \"69e1e4b23ce5883302df41da3513816c4484d9923279bcad32fbc5782e78b1c0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 23:43:50.147549 kubelet[3240]: I0513 23:43:50.147491 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgr5n\" (UniqueName: \"kubernetes.io/projected/cc4041f0-081d-49fe-b42b-e306764e98ed-kube-api-access-kgr5n\") pod \"cc4041f0-081d-49fe-b42b-e306764e98ed\" (UID: \"cc4041f0-081d-49fe-b42b-e306764e98ed\") " May 13 23:43:50.175916 kubelet[3240]: I0513 23:43:50.148804 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc4041f0-081d-49fe-b42b-e306764e98ed-tigera-ca-bundle\") pod \"cc4041f0-081d-49fe-b42b-e306764e98ed\" (UID: \"cc4041f0-081d-49fe-b42b-e306764e98ed\") " May 13 23:43:50.175916 kubelet[3240]: I0513 23:43:50.149019 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cc4041f0-081d-49fe-b42b-e306764e98ed-typha-certs\") pod \"cc4041f0-081d-49fe-b42b-e306764e98ed\" (UID: \"cc4041f0-081d-49fe-b42b-e306764e98ed\") " May 13 23:43:50.182968 kubelet[3240]: I0513 23:43:50.182202 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc4041f0-081d-49fe-b42b-e306764e98ed-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "cc4041f0-081d-49fe-b42b-e306764e98ed" (UID: "cc4041f0-081d-49fe-b42b-e306764e98ed"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 23:43:50.182968 kubelet[3240]: I0513 23:43:50.182559 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc4041f0-081d-49fe-b42b-e306764e98ed-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "cc4041f0-081d-49fe-b42b-e306764e98ed" (UID: "cc4041f0-081d-49fe-b42b-e306764e98ed"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 23:43:50.186558 kubelet[3240]: I0513 23:43:50.186041 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc4041f0-081d-49fe-b42b-e306764e98ed-kube-api-access-kgr5n" (OuterVolumeSpecName: "kube-api-access-kgr5n") pod "cc4041f0-081d-49fe-b42b-e306764e98ed" (UID: "cc4041f0-081d-49fe-b42b-e306764e98ed"). InnerVolumeSpecName "kube-api-access-kgr5n". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 23:43:50.188854 systemd[1]: Removed slice kubepods-besteffort-pod19b2eae0_3ae2_409c_b6b4_0b92be88d01f.slice - libcontainer container kubepods-besteffort-pod19b2eae0_3ae2_409c_b6b4_0b92be88d01f.slice. May 13 23:43:50.188956 systemd[1]: kubepods-besteffort-pod19b2eae0_3ae2_409c_b6b4_0b92be88d01f.slice: Consumed 1.674s CPU time, 293.5M memory peak, 595K read from disk, 157.2M written to disk. 
May 13 23:43:50.220377 containerd[1805]: time="2025-05-13T23:43:50.220316269Z" level=info msg="Container 3594a1b879693e84a6a6f4fcc11358a8a4e518511b39e61c2715e0ea49c8d83b: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:50.252357 kubelet[3240]: I0513 23:43:50.250033 3240 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kgr5n\" (UniqueName: \"kubernetes.io/projected/cc4041f0-081d-49fe-b42b-e306764e98ed-kube-api-access-kgr5n\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:50.252762 kubelet[3240]: I0513 23:43:50.252630 3240 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc4041f0-081d-49fe-b42b-e306764e98ed-tigera-ca-bundle\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:50.252762 kubelet[3240]: I0513 23:43:50.252654 3240 reconciler_common.go:299] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cc4041f0-081d-49fe-b42b-e306764e98ed-typha-certs\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:50.253305 containerd[1805]: time="2025-05-13T23:43:50.253256693Z" level=info msg="CreateContainer within sandbox \"69e1e4b23ce5883302df41da3513816c4484d9923279bcad32fbc5782e78b1c0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3594a1b879693e84a6a6f4fcc11358a8a4e518511b39e61c2715e0ea49c8d83b\"" May 13 23:43:50.254609 containerd[1805]: time="2025-05-13T23:43:50.254568775Z" level=info msg="StartContainer for \"3594a1b879693e84a6a6f4fcc11358a8a4e518511b39e61c2715e0ea49c8d83b\"" May 13 23:43:50.257627 containerd[1805]: time="2025-05-13T23:43:50.257090060Z" level=info msg="connecting to shim 3594a1b879693e84a6a6f4fcc11358a8a4e518511b39e61c2715e0ea49c8d83b" address="unix:///run/containerd/s/43194a6c7cac6bbe939295f296d80358a85e989230d8fa676e6cc28a724c3991" protocol=ttrpc version=3 May 13 23:43:50.289879 systemd[1]: Started cri-containerd-3594a1b879693e84a6a6f4fcc11358a8a4e518511b39e61c2715e0ea49c8d83b.scope - libcontainer container 3594a1b879693e84a6a6f4fcc11358a8a4e518511b39e61c2715e0ea49c8d83b. May 13 23:43:50.383233 systemd[1]: cri-containerd-3594a1b879693e84a6a6f4fcc11358a8a4e518511b39e61c2715e0ea49c8d83b.scope: Deactivated successfully. May 13 23:43:50.385682 systemd[1]: cri-containerd-3594a1b879693e84a6a6f4fcc11358a8a4e518511b39e61c2715e0ea49c8d83b.scope: Consumed 34ms CPU time, 7.8M memory peak, 6.2M written to disk. 
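flexvol-driver starts and its scope is deactivated roughly 100 ms later after 34 ms of CPU, which is the init-container pattern: it copies the FlexVolume driver onto the host path mounted as flexvol-driver-host and exits before calico-node proper starts. A sketch of that fragment of the pod spec; the image and host path are assumptions based on the upstream Calico manifest, not read from this host:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate
	spec := corev1.PodSpec{
		InitContainers: []corev1.Container{{
			Name:  "flexvol-driver",
			Image: "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3", // assumed
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "flexvol-driver-host",
				MountPath: "/host/driver",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "flexvol-driver-host",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{
					// Assumed path from the upstream manifest.
					Path: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds",
					Type: &hostPathType,
				},
			},
		}},
	}
	fmt.Println(spec.InitContainers[0].Name, "->", spec.Volumes[0].HostPath.Path)
}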
May 13 23:43:50.390574 containerd[1805]: time="2025-05-13T23:43:50.390161196Z" level=info msg="received exit event container_id:\"3594a1b879693e84a6a6f4fcc11358a8a4e518511b39e61c2715e0ea49c8d83b\" id:\"3594a1b879693e84a6a6f4fcc11358a8a4e518511b39e61c2715e0ea49c8d83b\" pid:5471 exited_at:{seconds:1747179830 nanos:389830835}" May 13 23:43:50.392938 containerd[1805]: time="2025-05-13T23:43:50.392837921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3594a1b879693e84a6a6f4fcc11358a8a4e518511b39e61c2715e0ea49c8d83b\" id:\"3594a1b879693e84a6a6f4fcc11358a8a4e518511b39e61c2715e0ea49c8d83b\" pid:5471 exited_at:{seconds:1747179830 nanos:389830835}" May 13 23:43:50.393933 containerd[1805]: time="2025-05-13T23:43:50.393825163Z" level=info msg="StartContainer for \"3594a1b879693e84a6a6f4fcc11358a8a4e518511b39e61c2715e0ea49c8d83b\" returns successfully" May 13 23:43:50.406867 kubelet[3240]: I0513 23:43:50.406458 3240 scope.go:117] "RemoveContainer" containerID="f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c" May 13 23:43:50.422370 containerd[1805]: time="2025-05-13T23:43:50.422337817Z" level=info msg="RemoveContainer for \"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\"" May 13 23:43:50.426889 systemd[1]: Removed slice kubepods-besteffort-podcc4041f0_081d_49fe_b42b_e306764e98ed.slice - libcontainer container kubepods-besteffort-podcc4041f0_081d_49fe_b42b_e306764e98ed.slice. May 13 23:43:50.471135 containerd[1805]: time="2025-05-13T23:43:50.471034151Z" level=info msg="RemoveContainer for \"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\" returns successfully" May 13 23:43:50.493383 kubelet[3240]: I0513 23:43:50.471515 3240 scope.go:117] "RemoveContainer" containerID="d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50" May 13 23:43:50.495087 containerd[1805]: time="2025-05-13T23:43:50.495050597Z" level=info msg="RemoveContainer for \"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\"" May 13 23:43:50.516037 containerd[1805]: time="2025-05-13T23:43:50.515572996Z" level=info msg="RemoveContainer for \"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\" returns successfully" May 13 23:43:50.517764 kubelet[3240]: I0513 23:43:50.516678 3240 scope.go:117] "RemoveContainer" containerID="0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd" May 13 23:43:50.521281 containerd[1805]: time="2025-05-13T23:43:50.521057007Z" level=info msg="RemoveContainer for \"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\"" May 13 23:43:50.535953 containerd[1805]: time="2025-05-13T23:43:50.535923515Z" level=info msg="RemoveContainer for \"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\" returns successfully" May 13 23:43:50.536644 kubelet[3240]: I0513 23:43:50.536615 3240 scope.go:117] "RemoveContainer" containerID="f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c" May 13 23:43:50.537211 containerd[1805]: time="2025-05-13T23:43:50.537171158Z" level=error msg="ContainerStatus for \"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\": not found" May 13 23:43:50.537517 kubelet[3240]: E0513 23:43:50.537483 3240 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\": not found" containerID="f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c" May 13 23:43:50.537638 kubelet[3240]: I0513 23:43:50.537518 3240 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c"} err="failed to get container status \"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f93e7719293a7fc9fdeb358d3e56ee3160137d5f592771ed7392f603f5d6ec7c\": not found" May 13 23:43:50.537711 kubelet[3240]: I0513 23:43:50.537638 3240 scope.go:117] "RemoveContainer" containerID="d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50" May 13 23:43:50.538158 containerd[1805]: time="2025-05-13T23:43:50.538120920Z" level=error msg="ContainerStatus for \"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\": not found" May 13 23:43:50.538294 kubelet[3240]: E0513 23:43:50.538269 3240 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\": not found" containerID="d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50" May 13 23:43:50.538372 kubelet[3240]: I0513 23:43:50.538347 3240 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50"} err="failed to get container status \"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\": rpc error: code = NotFound desc = an error occurred when try to find container \"d020c02a46c60740f08c8713128ff0c7606be74826bf77f342f307c8f0b37f50\": not found" May 13 23:43:50.538372 kubelet[3240]: I0513 23:43:50.538372 3240 scope.go:117] "RemoveContainer" containerID="0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd" May 13 23:43:50.538823 containerd[1805]: time="2025-05-13T23:43:50.538552561Z" level=error msg="ContainerStatus for \"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\": not found" May 13 23:43:50.538873 kubelet[3240]: E0513 23:43:50.538746 3240 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\": not found" containerID="0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd" May 13 23:43:50.538975 kubelet[3240]: I0513 23:43:50.538943 3240 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd"} err="failed to get container status \"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ab1d398874928acec354bb4d1c87f3a7acb0fbbe2f3b2899ffd295cf217fccd\": not found" May 13 23:43:50.538975 kubelet[3240]: I0513 23:43:50.538969 3240 scope.go:117] 
"RemoveContainer" containerID="9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311" May 13 23:43:50.541611 containerd[1805]: time="2025-05-13T23:43:50.541531926Z" level=info msg="RemoveContainer for \"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\"" May 13 23:43:50.549850 kubelet[3240]: I0513 23:43:50.549674 3240 memory_manager.go:355] "RemoveStaleState removing state" podUID="cc4041f0-081d-49fe-b42b-e306764e98ed" containerName="calico-typha" May 13 23:43:50.555906 containerd[1805]: time="2025-05-13T23:43:50.554423831Z" level=info msg="RemoveContainer for \"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\" returns successfully" May 13 23:43:50.557341 kubelet[3240]: I0513 23:43:50.557325 3240 scope.go:117] "RemoveContainer" containerID="9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311" May 13 23:43:50.558613 containerd[1805]: time="2025-05-13T23:43:50.558564559Z" level=error msg="ContainerStatus for \"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\": not found" May 13 23:43:50.559067 kubelet[3240]: E0513 23:43:50.559027 3240 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\": not found" containerID="9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311" May 13 23:43:50.559173 kubelet[3240]: I0513 23:43:50.559068 3240 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311"} err="failed to get container status \"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\": rpc error: code = NotFound desc = an error occurred when try to find container \"9bf3069fbe8d7ab6569ba17ebdadc36c595d24bfc4da70320bf444a59043e311\": not found" May 13 23:43:50.567107 systemd[1]: Created slice kubepods-besteffort-pod180df39d_adb3_4481_990f_7756ee11c714.slice - libcontainer container kubepods-besteffort-pod180df39d_adb3_4481_990f_7756ee11c714.slice. 
May 13 23:43:50.657309 kubelet[3240]: I0513 23:43:50.656726 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn2fs\" (UniqueName: \"kubernetes.io/projected/180df39d-adb3-4481-990f-7756ee11c714-kube-api-access-qn2fs\") pod \"calico-typha-797c575b-wt75f\" (UID: \"180df39d-adb3-4481-990f-7756ee11c714\") " pod="calico-system/calico-typha-797c575b-wt75f" May 13 23:43:50.657309 kubelet[3240]: I0513 23:43:50.656770 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/180df39d-adb3-4481-990f-7756ee11c714-typha-certs\") pod \"calico-typha-797c575b-wt75f\" (UID: \"180df39d-adb3-4481-990f-7756ee11c714\") " pod="calico-system/calico-typha-797c575b-wt75f" May 13 23:43:50.657309 kubelet[3240]: I0513 23:43:50.656793 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/180df39d-adb3-4481-990f-7756ee11c714-tigera-ca-bundle\") pod \"calico-typha-797c575b-wt75f\" (UID: \"180df39d-adb3-4481-990f-7756ee11c714\") " pod="calico-system/calico-typha-797c575b-wt75f" May 13 23:43:50.778949 containerd[1805]: time="2025-05-13T23:43:50.778768422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:50.782658 containerd[1805]: time="2025-05-13T23:43:50.782607989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 13 23:43:50.789563 containerd[1805]: time="2025-05-13T23:43:50.789261562Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:50.798995 containerd[1805]: time="2025-05-13T23:43:50.798954981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:50.799305 containerd[1805]: time="2025-05-13T23:43:50.799271861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 3.679802659s" May 13 23:43:50.799305 containerd[1805]: time="2025-05-13T23:43:50.799302541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 13 23:43:50.803212 containerd[1805]: time="2025-05-13T23:43:50.802773268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 23:43:50.808636 containerd[1805]: time="2025-05-13T23:43:50.808301559Z" level=info msg="CreateContainer within sandbox \"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 23:43:50.838546 containerd[1805]: time="2025-05-13T23:43:50.838507537Z" level=info msg="Container f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7: CDI 
devices from CRI Config.CDIDevices: []" May 13 23:43:50.859202 containerd[1805]: time="2025-05-13T23:43:50.859141976Z" level=info msg="CreateContainer within sandbox \"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\"" May 13 23:43:50.860503 containerd[1805]: time="2025-05-13T23:43:50.860472339Z" level=info msg="StartContainer for \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\"" May 13 23:43:50.861417 containerd[1805]: time="2025-05-13T23:43:50.861387701Z" level=info msg="connecting to shim f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7" address="unix:///run/containerd/s/a2bbf082b5ce9ff33946b0f28551c841b556b93c29f0ff3b1c6b22810052be75" protocol=ttrpc version=3 May 13 23:43:50.872521 containerd[1805]: time="2025-05-13T23:43:50.872488882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-797c575b-wt75f,Uid:180df39d-adb3-4481-990f-7756ee11c714,Namespace:calico-system,Attempt:0,}" May 13 23:43:50.885775 systemd[1]: Started cri-containerd-f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7.scope - libcontainer container f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7. May 13 23:43:50.938989 containerd[1805]: time="2025-05-13T23:43:50.938942610Z" level=info msg="StartContainer for \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\" returns successfully" May 13 23:43:50.943430 containerd[1805]: time="2025-05-13T23:43:50.943115498Z" level=info msg="connecting to shim 05b53400d5e0fdd673d4b302cf7e4b07d2ab0dce622ea06613dbce78dc3d3c2d" address="unix:///run/containerd/s/6c68abdc2d77bc20d7d70f95d791bceb6cbac2557c80946e48b350cde34a06ad" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:50.987961 systemd[1]: var-lib-kubelet-pods-cc4041f0\x2d081d\x2d49fe\x2db42b\x2de306764e98ed-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. May 13 23:43:50.988056 systemd[1]: var-lib-kubelet-pods-cc4041f0\x2d081d\x2d49fe\x2db42b\x2de306764e98ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkgr5n.mount: Deactivated successfully. May 13 23:43:50.988111 systemd[1]: var-lib-kubelet-pods-cc4041f0\x2d081d\x2d49fe\x2db42b\x2de306764e98ed-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. May 13 23:43:50.997770 systemd[1]: Started cri-containerd-05b53400d5e0fdd673d4b302cf7e4b07d2ab0dce622ea06613dbce78dc3d3c2d.scope - libcontainer container 05b53400d5e0fdd673d4b302cf7e4b07d2ab0dce622ea06613dbce78dc3d3c2d. 
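The "connecting to shim … protocol=ttrpc version=3" entries above show the CRI plugin dialing a per-sandbox unix socket under /run/containerd/s/ before issuing task requests over ttrpc. A stdlib-only sketch of reaching such a socket (the path is copied from the log; a real client would wrap the connection in a ttrpc client, which is omitted here):

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // Socket path from the "connecting to shim" entry above; each shim
    // serves its task API on its own unix socket.
    const sock = "/run/containerd/s/6c68abdc2d77bc20d7d70f95d791bceb6cbac2557c80946e48b350cde34a06ad"

    conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    if err != nil {
        fmt.Println("shim not reachable:", err)
        return
    }
    defer conn.Close()
    // protocol=ttrpc version=3 in the log refers to what runs over this
    // connection; plain stdlib stops at the connected socket.
    fmt.Println("connected to", conn.RemoteAddr())
}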
May 13 23:43:51.078707 containerd[1805]: time="2025-05-13T23:43:51.078663118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-797c575b-wt75f,Uid:180df39d-adb3-4481-990f-7756ee11c714,Namespace:calico-system,Attempt:0,} returns sandbox id \"05b53400d5e0fdd673d4b302cf7e4b07d2ab0dce622ea06613dbce78dc3d3c2d\"" May 13 23:43:51.095091 containerd[1805]: time="2025-05-13T23:43:51.094866349Z" level=info msg="CreateContainer within sandbox \"05b53400d5e0fdd673d4b302cf7e4b07d2ab0dce622ea06613dbce78dc3d3c2d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 23:43:51.133626 containerd[1805]: time="2025-05-13T23:43:51.133469583Z" level=info msg="Container 595c1d46a1e44bdb32422f11eddfd7df1ca6d2215b67f97b4c3d0c071c969486: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:51.153853 containerd[1805]: time="2025-05-13T23:43:51.153724222Z" level=info msg="CreateContainer within sandbox \"05b53400d5e0fdd673d4b302cf7e4b07d2ab0dce622ea06613dbce78dc3d3c2d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"595c1d46a1e44bdb32422f11eddfd7df1ca6d2215b67f97b4c3d0c071c969486\"" May 13 23:43:51.155038 containerd[1805]: time="2025-05-13T23:43:51.154387103Z" level=info msg="StartContainer for \"595c1d46a1e44bdb32422f11eddfd7df1ca6d2215b67f97b4c3d0c071c969486\"" May 13 23:43:51.155833 containerd[1805]: time="2025-05-13T23:43:51.155802226Z" level=info msg="connecting to shim 595c1d46a1e44bdb32422f11eddfd7df1ca6d2215b67f97b4c3d0c071c969486" address="unix:///run/containerd/s/6c68abdc2d77bc20d7d70f95d791bceb6cbac2557c80946e48b350cde34a06ad" protocol=ttrpc version=3 May 13 23:43:51.177748 systemd[1]: Started cri-containerd-595c1d46a1e44bdb32422f11eddfd7df1ca6d2215b67f97b4c3d0c071c969486.scope - libcontainer container 595c1d46a1e44bdb32422f11eddfd7df1ca6d2215b67f97b4c3d0c071c969486. May 13 23:43:51.234103 containerd[1805]: time="2025-05-13T23:43:51.233996896Z" level=info msg="StartContainer for \"595c1d46a1e44bdb32422f11eddfd7df1ca6d2215b67f97b4c3d0c071c969486\" returns successfully" May 13 23:43:51.417555 containerd[1805]: time="2025-05-13T23:43:51.417121408Z" level=info msg="StopContainer for \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\" with timeout 30 (s)" May 13 23:43:51.427406 containerd[1805]: time="2025-05-13T23:43:51.426100025Z" level=info msg="Stop container \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\" with signal terminated" May 13 23:43:51.445688 systemd[1]: cri-containerd-f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7.scope: Deactivated successfully. 
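Shortly below, "StopContainer … with timeout 30 (s)" followed by "Stop container … with signal terminated" shows the usual CRI stop sequence: SIGTERM first, then SIGKILL if the grace period lapses. A unix-only stdlib sketch of that pattern (the sleep process and the 2-second grace period are stand-ins for the container init and the 30s in the log):

package main

import (
    "fmt"
    "os/exec"
    "syscall"
    "time"
)

func main() {
    // Hypothetical workload standing in for the container's init process.
    cmd := exec.Command("sleep", "300")
    if err := cmd.Start(); err != nil {
        panic(err)
    }

    // Deliver SIGTERM ("signal terminated"), allow the grace period,
    // then fall back to SIGKILL.
    _ = cmd.Process.Signal(syscall.SIGTERM)

    done := make(chan error, 1)
    go func() { done <- cmd.Wait() }()

    select {
    case err := <-done:
        fmt.Println("exited within grace period:", err)
    case <-time.After(2 * time.Second):
        _ = cmd.Process.Kill()
        fmt.Println("grace period elapsed, killed:", <-done)
    }
}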
May 13 23:43:51.454503 containerd[1805]: time="2025-05-13T23:43:51.454179079Z" level=info msg="CreateContainer within sandbox \"69e1e4b23ce5883302df41da3513816c4484d9923279bcad32fbc5782e78b1c0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:43:51.462579 containerd[1805]: time="2025-05-13T23:43:51.462542495Z" level=info msg="received exit event container_id:\"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\" id:\"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\" pid:5528 exit_status:2 exited_at:{seconds:1747179831 nanos:462306215}" May 13 23:43:51.462977 containerd[1805]: time="2025-05-13T23:43:51.462929456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\" id:\"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\" pid:5528 exit_status:2 exited_at:{seconds:1747179831 nanos:462306215}" May 13 23:43:51.468778 containerd[1805]: time="2025-05-13T23:43:51.467317544Z" level=error msg="ExecSync for \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"40a1954bb29aa63003ef2450567083da4b3e2a7a2eea9d811886b2df74b628b8\": OCI runtime exec failed: exec failed: unable to create new parent process: namespace path: lstat /proc/5528/ns/ipc: no such file or directory" May 13 23:43:51.469162 kubelet[3240]: E0513 23:43:51.468981 3240 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"40a1954bb29aa63003ef2450567083da4b3e2a7a2eea9d811886b2df74b628b8\": OCI runtime exec failed: exec failed: unable to create new parent process: namespace path: lstat /proc/5528/ns/ipc: no such file or directory" containerID="f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7" cmd=["/usr/bin/check-status","-r"] May 13 23:43:51.472061 kubelet[3240]: I0513 23:43:51.471325 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b7dbd7c4d-d6t84" podStartSLOduration=27.708464513 podStartE2EDuration="34.471307472s" podCreationTimestamp="2025-05-13 23:43:17 +0000 UTC" firstStartedPulling="2025-05-13 23:43:44.038410466 +0000 UTC m=+41.992893062" lastFinishedPulling="2025-05-13 23:43:50.801253425 +0000 UTC m=+48.755736021" observedRunningTime="2025-05-13 23:43:51.467697305 +0000 UTC m=+49.422179901" watchObservedRunningTime="2025-05-13 23:43:51.471307472 +0000 UTC m=+49.425790028" May 13 23:43:51.489506 kubelet[3240]: I0513 23:43:51.489400 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-797c575b-wt75f" podStartSLOduration=3.489381147 podStartE2EDuration="3.489381147s" podCreationTimestamp="2025-05-13 23:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:51.487510543 +0000 UTC m=+49.441993139" watchObservedRunningTime="2025-05-13 23:43:51.489381147 +0000 UTC m=+49.443863743" May 13 23:43:51.525484 containerd[1805]: time="2025-05-13T23:43:51.525285976Z" level=error msg="ExecSync for \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"8b13f452c81aa87cae7639563a4f49a6638a9904adead0b7e1792385787cc4f0\": cannot exec in a deleted 
state" May 13 23:43:51.525826 kubelet[3240]: E0513 23:43:51.525787 3240 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"8b13f452c81aa87cae7639563a4f49a6638a9904adead0b7e1792385787cc4f0\": cannot exec in a deleted state" containerID="f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7" cmd=["/usr/bin/check-status","-r"] May 13 23:43:51.530739 containerd[1805]: time="2025-05-13T23:43:51.529291943Z" level=error msg="ExecSync for \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" May 13 23:43:51.530839 kubelet[3240]: E0513 23:43:51.529723 3240 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7" cmd=["/usr/bin/check-status","-r"] May 13 23:43:51.532409 containerd[1805]: time="2025-05-13T23:43:51.532046749Z" level=info msg="StopContainer for \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\" returns successfully" May 13 23:43:51.532916 containerd[1805]: time="2025-05-13T23:43:51.532607710Z" level=info msg="StopPodSandbox for \"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\"" May 13 23:43:51.532916 containerd[1805]: time="2025-05-13T23:43:51.532695670Z" level=info msg="Container to stop \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:43:51.539078 containerd[1805]: time="2025-05-13T23:43:51.538930162Z" level=info msg="Container ccbbe08f835e641ea7fbfef2c897088a18c1515038d062ba7ea58c68ad76c529: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:51.547552 systemd[1]: cri-containerd-3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35.scope: Deactivated successfully. 
May 13 23:43:51.552992 containerd[1805]: time="2025-05-13T23:43:51.552678948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" id:\"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" pid:5046 exit_status:137 exited_at:{seconds:1747179831 nanos:550897545}" May 13 23:43:51.576407 containerd[1805]: time="2025-05-13T23:43:51.576184234Z" level=info msg="CreateContainer within sandbox \"69e1e4b23ce5883302df41da3513816c4484d9923279bcad32fbc5782e78b1c0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ccbbe08f835e641ea7fbfef2c897088a18c1515038d062ba7ea58c68ad76c529\"" May 13 23:43:51.579943 containerd[1805]: time="2025-05-13T23:43:51.579672880Z" level=info msg="StartContainer for \"ccbbe08f835e641ea7fbfef2c897088a18c1515038d062ba7ea58c68ad76c529\"" May 13 23:43:51.583794 containerd[1805]: time="2025-05-13T23:43:51.583770688Z" level=info msg="connecting to shim ccbbe08f835e641ea7fbfef2c897088a18c1515038d062ba7ea58c68ad76c529" address="unix:///run/containerd/s/43194a6c7cac6bbe939295f296d80358a85e989230d8fa676e6cc28a724c3991" protocol=ttrpc version=3 May 13 23:43:51.587387 containerd[1805]: time="2025-05-13T23:43:51.587239255Z" level=info msg="shim disconnected" id=3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35 namespace=k8s.io May 13 23:43:51.587387 containerd[1805]: time="2025-05-13T23:43:51.587259935Z" level=warning msg="cleaning up after shim disconnected" id=3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35 namespace=k8s.io May 13 23:43:51.587387 containerd[1805]: time="2025-05-13T23:43:51.587285935Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:43:51.609776 systemd[1]: Started cri-containerd-ccbbe08f835e641ea7fbfef2c897088a18c1515038d062ba7ea58c68ad76c529.scope - libcontainer container ccbbe08f835e641ea7fbfef2c897088a18c1515038d062ba7ea58c68ad76c529. May 13 23:43:51.621035 containerd[1805]: time="2025-05-13T23:43:51.620973240Z" level=info msg="received exit event sandbox_id:\"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" exit_status:137 exited_at:{seconds:1747179831 nanos:550897545}" May 13 23:43:51.621678 containerd[1805]: time="2025-05-13T23:43:51.621298000Z" level=error msg="Failed to handle event container_id:\"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" id:\"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" pid:5046 exit_status:137 exited_at:{seconds:1747179831 nanos:550897545} for 3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" May 13 23:43:51.661345 containerd[1805]: time="2025-05-13T23:43:51.661146997Z" level=info msg="StartContainer for \"ccbbe08f835e641ea7fbfef2c897088a18c1515038d062ba7ea58c68ad76c529\" returns successfully" May 13 23:43:51.693905 systemd-networkd[1346]: calie46ef60241f: Link DOWN May 13 23:43:51.693912 systemd-networkd[1346]: calie46ef60241f: Lost carrier May 13 23:43:51.814232 containerd[1805]: 2025-05-13 23:43:51.691 [INFO][5730] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" May 13 23:43:51.814232 containerd[1805]: 2025-05-13 23:43:51.692 [INFO][5730] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" iface="eth0" netns="/var/run/netns/cni-1aa4533f-b95b-a049-42de-6d36e7ee788f" May 13 23:43:51.814232 containerd[1805]: 2025-05-13 23:43:51.693 [INFO][5730] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" iface="eth0" netns="/var/run/netns/cni-1aa4533f-b95b-a049-42de-6d36e7ee788f" May 13 23:43:51.814232 containerd[1805]: 2025-05-13 23:43:51.698 [INFO][5730] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" after=6.349252ms iface="eth0" netns="/var/run/netns/cni-1aa4533f-b95b-a049-42de-6d36e7ee788f" May 13 23:43:51.814232 containerd[1805]: 2025-05-13 23:43:51.699 [INFO][5730] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" May 13 23:43:51.814232 containerd[1805]: 2025-05-13 23:43:51.699 [INFO][5730] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" May 13 23:43:51.814232 containerd[1805]: 2025-05-13 23:43:51.746 [INFO][5746] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" HandleID="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:43:51.814232 containerd[1805]: 2025-05-13 23:43:51.746 [INFO][5746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:51.814232 containerd[1805]: 2025-05-13 23:43:51.746 [INFO][5746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:43:51.814232 containerd[1805]: 2025-05-13 23:43:51.804 [INFO][5746] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" HandleID="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:43:51.814232 containerd[1805]: 2025-05-13 23:43:51.805 [INFO][5746] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" HandleID="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:43:51.814232 containerd[1805]: 2025-05-13 23:43:51.807 [INFO][5746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:43:51.814232 containerd[1805]: 2025-05-13 23:43:51.810 [INFO][5730] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" May 13 23:43:51.815524 containerd[1805]: time="2025-05-13T23:43:51.814989652Z" level=info msg="TearDown network for sandbox \"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" successfully" May 13 23:43:51.815524 containerd[1805]: time="2025-05-13T23:43:51.815021132Z" level=info msg="StopPodSandbox for \"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" returns successfully" May 13 23:43:51.867643 kubelet[3240]: I0513 23:43:51.867495 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc-tigera-ca-bundle\") pod \"9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc\" (UID: \"9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc\") " May 13 23:43:51.868643 kubelet[3240]: I0513 23:43:51.868418 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nccpr\" (UniqueName: \"kubernetes.io/projected/9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc-kube-api-access-nccpr\") pod \"9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc\" (UID: \"9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc\") " May 13 23:43:51.875998 kubelet[3240]: I0513 23:43:51.875878 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc-kube-api-access-nccpr" (OuterVolumeSpecName: "kube-api-access-nccpr") pod "9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc" (UID: "9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc"). InnerVolumeSpecName "kube-api-access-nccpr". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 23:43:51.876663 kubelet[3240]: I0513 23:43:51.876510 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc" (UID: "9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 23:43:51.969781 kubelet[3240]: I0513 23:43:51.969640 3240 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc-tigera-ca-bundle\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:51.969781 kubelet[3240]: I0513 23:43:51.969680 3240 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nccpr\" (UniqueName: \"kubernetes.io/projected/9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc-kube-api-access-nccpr\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:43:51.981396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7-rootfs.mount: Deactivated successfully. May 13 23:43:51.982128 systemd[1]: var-lib-kubelet-pods-9f9d7e09\x2d0870\x2d4cc7\x2dbce7\x2da13d0ea53ccc-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. May 13 23:43:51.982313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35-rootfs.mount: Deactivated successfully. May 13 23:43:51.982522 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35-shm.mount: Deactivated successfully. 
May 13 23:43:51.982785 systemd[1]: run-netns-cni\x2d1aa4533f\x2db95b\x2da049\x2d42de\x2d6d36e7ee788f.mount: Deactivated successfully. May 13 23:43:51.983208 systemd[1]: var-lib-kubelet-pods-9f9d7e09\x2d0870\x2d4cc7\x2dbce7\x2da13d0ea53ccc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnccpr.mount: Deactivated successfully. May 13 23:43:52.166702 kubelet[3240]: I0513 23:43:52.165986 3240 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19b2eae0-3ae2-409c-b6b4-0b92be88d01f" path="/var/lib/kubelet/pods/19b2eae0-3ae2-409c-b6b4-0b92be88d01f/volumes" May 13 23:43:52.166702 kubelet[3240]: I0513 23:43:52.166452 3240 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc4041f0-081d-49fe-b42b-e306764e98ed" path="/var/lib/kubelet/pods/cc4041f0-081d-49fe-b42b-e306764e98ed/volumes" May 13 23:43:52.175201 systemd[1]: Removed slice kubepods-besteffort-pod9f9d7e09_0870_4cc7_bce7_a13d0ea53ccc.slice - libcontainer container kubepods-besteffort-pod9f9d7e09_0870_4cc7_bce7_a13d0ea53ccc.slice. May 13 23:43:52.312095 systemd[1]: cri-containerd-ccbbe08f835e641ea7fbfef2c897088a18c1515038d062ba7ea58c68ad76c529.scope: Deactivated successfully. May 13 23:43:52.313691 systemd[1]: cri-containerd-ccbbe08f835e641ea7fbfef2c897088a18c1515038d062ba7ea58c68ad76c529.scope: Consumed 511ms CPU time, 56.4M memory peak, 29.5M read from disk. May 13 23:43:52.315644 containerd[1805]: time="2025-05-13T23:43:52.315428653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccbbe08f835e641ea7fbfef2c897088a18c1515038d062ba7ea58c68ad76c529\" id:\"ccbbe08f835e641ea7fbfef2c897088a18c1515038d062ba7ea58c68ad76c529\" pid:5710 exited_at:{seconds:1747179832 nanos:315120173}" May 13 23:43:52.315644 containerd[1805]: time="2025-05-13T23:43:52.315505574Z" level=info msg="received exit event container_id:\"ccbbe08f835e641ea7fbfef2c897088a18c1515038d062ba7ea58c68ad76c529\" id:\"ccbbe08f835e641ea7fbfef2c897088a18c1515038d062ba7ea58c68ad76c529\" pid:5710 exited_at:{seconds:1747179832 nanos:315120173}" May 13 23:43:52.344047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccbbe08f835e641ea7fbfef2c897088a18c1515038d062ba7ea58c68ad76c529-rootfs.mount: Deactivated successfully. 
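Below, systemd reports "Consumed 511ms CPU time, 56.4M memory peak, 29.5M read from disk" when the install-cni scope exits; those figures come from the scope's cgroup v2 accounting files. A stdlib sketch of reading the same counters (the scope path is hypothetical; the real unit is the cri-containerd-ccbbe08f….scope under the kubepods slice):

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strconv"
    "strings"
)

func main() {
    cg := "/sys/fs/cgroup/system.slice/example.scope" // illustrative path

    // cpu.stat holds cumulative usage in microseconds ("usage_usec N").
    if data, err := os.ReadFile(filepath.Join(cg, "cpu.stat")); err == nil {
        for _, line := range strings.Split(string(data), "\n") {
            if v, ok := strings.CutPrefix(line, "usage_usec "); ok {
                usec, _ := strconv.ParseInt(v, 10, 64)
                fmt.Printf("consumed %dms CPU time\n", usec/1000)
            }
        }
    }

    // memory.peak (cgroup v2) is the high-water mark in bytes.
    if data, err := os.ReadFile(filepath.Join(cg, "memory.peak")); err == nil {
        peak, _ := strconv.ParseInt(strings.TrimSpace(string(data)), 10, 64)
        fmt.Printf("%.1fM memory peak\n", float64(peak)/(1<<20))
    }
}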
May 13 23:43:52.459616 kubelet[3240]: I0513 23:43:52.459504 3240 scope.go:117] "RemoveContainer" containerID="f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7" May 13 23:43:52.464976 containerd[1805]: time="2025-05-13T23:43:52.462346616Z" level=info msg="RemoveContainer for \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\"" May 13 23:43:52.484762 containerd[1805]: time="2025-05-13T23:43:52.483552296Z" level=info msg="RemoveContainer for \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\" returns successfully" May 13 23:43:52.485063 kubelet[3240]: I0513 23:43:52.485043 3240 scope.go:117] "RemoveContainer" containerID="f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7" May 13 23:43:52.486230 containerd[1805]: time="2025-05-13T23:43:52.486056621Z" level=error msg="ContainerStatus for \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\": not found" May 13 23:43:52.486365 kubelet[3240]: E0513 23:43:52.486174 3240 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\": not found" containerID="f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7" May 13 23:43:52.486365 kubelet[3240]: I0513 23:43:52.486197 3240 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7"} err="failed to get container status \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"f4c45348c8644dbc97ccccd50424b3131f0fac2a3b673962252697f26be077f7\": not found" May 13 23:43:52.486768 containerd[1805]: time="2025-05-13T23:43:52.486743302Z" level=info msg="CreateContainer within sandbox \"69e1e4b23ce5883302df41da3513816c4484d9923279bcad32fbc5782e78b1c0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 23:43:52.532354 containerd[1805]: time="2025-05-13T23:43:52.532131350Z" level=info msg="Container 51f96b3abd026eaa7a450d8dc473d97a42ffe8b2fb7161a1f74998358d1e1c4a: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:52.584695 containerd[1805]: time="2025-05-13T23:43:52.583213248Z" level=info msg="CreateContainer within sandbox \"69e1e4b23ce5883302df41da3513816c4484d9923279bcad32fbc5782e78b1c0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"51f96b3abd026eaa7a450d8dc473d97a42ffe8b2fb7161a1f74998358d1e1c4a\"" May 13 23:43:52.587734 containerd[1805]: time="2025-05-13T23:43:52.587687056Z" level=info msg="StartContainer for \"51f96b3abd026eaa7a450d8dc473d97a42ffe8b2fb7161a1f74998358d1e1c4a\"" May 13 23:43:52.591107 containerd[1805]: time="2025-05-13T23:43:52.590531822Z" level=info msg="connecting to shim 51f96b3abd026eaa7a450d8dc473d97a42ffe8b2fb7161a1f74998358d1e1c4a" address="unix:///run/containerd/s/43194a6c7cac6bbe939295f296d80358a85e989230d8fa676e6cc28a724c3991" protocol=ttrpc version=3 May 13 23:43:52.614751 systemd[1]: Started cri-containerd-51f96b3abd026eaa7a450d8dc473d97a42ffe8b2fb7161a1f74998358d1e1c4a.scope - libcontainer container 51f96b3abd026eaa7a450d8dc473d97a42ffe8b2fb7161a1f74998358d1e1c4a. 
May 13 23:43:52.663424 containerd[1805]: time="2025-05-13T23:43:52.663391802Z" level=info msg="StartContainer for \"51f96b3abd026eaa7a450d8dc473d97a42ffe8b2fb7161a1f74998358d1e1c4a\" returns successfully" May 13 23:43:53.057647 containerd[1805]: time="2025-05-13T23:43:53.057452759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:53.064160 containerd[1805]: time="2025-05-13T23:43:53.064080851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 23:43:53.070465 containerd[1805]: time="2025-05-13T23:43:53.070404263Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:53.076543 containerd[1805]: time="2025-05-13T23:43:53.076482195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:53.078583 containerd[1805]: time="2025-05-13T23:43:53.078535719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 2.275726931s" May 13 23:43:53.078676 containerd[1805]: time="2025-05-13T23:43:53.078584439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 23:43:53.081666 containerd[1805]: time="2025-05-13T23:43:53.081629685Z" level=info msg="CreateContainer within sandbox \"c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 23:43:53.085860 containerd[1805]: time="2025-05-13T23:43:53.085756853Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" id:\"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" pid:5046 exit_status:137 exited_at:{seconds:1747179831 nanos:550897545}" May 13 23:43:53.107137 containerd[1805]: time="2025-05-13T23:43:53.107094494Z" level=info msg="Container 2cd5fc02dfc6acf0fb358d2d35e8b17899dcf854a631f014330fea81f9a214df: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:53.135049 containerd[1805]: time="2025-05-13T23:43:53.135001388Z" level=info msg="CreateContainer within sandbox \"c853133735ee162c3e2750673419873dada3cdb4c1245efb7f4bf2e0fb068e50\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2cd5fc02dfc6acf0fb358d2d35e8b17899dcf854a631f014330fea81f9a214df\"" May 13 23:43:53.136189 containerd[1805]: time="2025-05-13T23:43:53.136143070Z" level=info msg="StartContainer for \"2cd5fc02dfc6acf0fb358d2d35e8b17899dcf854a631f014330fea81f9a214df\"" May 13 23:43:53.138527 containerd[1805]: time="2025-05-13T23:43:53.138476434Z" level=info msg="connecting to shim 2cd5fc02dfc6acf0fb358d2d35e8b17899dcf854a631f014330fea81f9a214df" 
address="unix:///run/containerd/s/768bc2882c1e36cf4a053ece0ad76bdac1cf0cb74111487b2aa836c07cfe5d8f" protocol=ttrpc version=3 May 13 23:43:53.161803 systemd[1]: Started cri-containerd-2cd5fc02dfc6acf0fb358d2d35e8b17899dcf854a631f014330fea81f9a214df.scope - libcontainer container 2cd5fc02dfc6acf0fb358d2d35e8b17899dcf854a631f014330fea81f9a214df. May 13 23:43:53.208127 containerd[1805]: time="2025-05-13T23:43:53.208084928Z" level=info msg="StartContainer for \"2cd5fc02dfc6acf0fb358d2d35e8b17899dcf854a631f014330fea81f9a214df\" returns successfully" May 13 23:43:53.252263 kubelet[3240]: I0513 23:43:53.252222 3240 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 23:43:53.252263 kubelet[3240]: I0513 23:43:53.252268 3240 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 23:43:53.537623 kubelet[3240]: I0513 23:43:53.537460 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-554zr" podStartSLOduration=4.53744192 podStartE2EDuration="4.53744192s" podCreationTimestamp="2025-05-13 23:43:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:53.516803881 +0000 UTC m=+51.471286477" watchObservedRunningTime="2025-05-13 23:43:53.53744192 +0000 UTC m=+51.491924516" May 13 23:43:53.549021 kubelet[3240]: I0513 23:43:53.548810 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-q7b2t" podStartSLOduration=27.454479469 podStartE2EDuration="36.548792662s" podCreationTimestamp="2025-05-13 23:43:17 +0000 UTC" firstStartedPulling="2025-05-13 23:43:43.984882367 +0000 UTC m=+41.939364963" lastFinishedPulling="2025-05-13 23:43:53.07919556 +0000 UTC m=+51.033678156" observedRunningTime="2025-05-13 23:43:53.538562643 +0000 UTC m=+51.493045239" watchObservedRunningTime="2025-05-13 23:43:53.548792662 +0000 UTC m=+51.503275258" May 13 23:43:53.549152 kubelet[3240]: I0513 23:43:53.549092 3240 memory_manager.go:355] "RemoveStaleState removing state" podUID="9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc" containerName="calico-kube-controllers" May 13 23:43:53.558602 systemd[1]: Created slice kubepods-besteffort-podb2a54b36_5063_4c55_a68d_c923d02f6ad4.slice - libcontainer container kubepods-besteffort-podb2a54b36_5063_4c55_a68d_c923d02f6ad4.slice. 
May 13 23:43:53.574613 containerd[1805]: time="2025-05-13T23:43:53.574543752Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51f96b3abd026eaa7a450d8dc473d97a42ffe8b2fb7161a1f74998358d1e1c4a\" id:\"77c5ee1604e818eeaf5c8d95812c57db3f07aef5b92186e2d95520205a4299ea\" pid:5873 exit_status:1 exited_at:{seconds:1747179833 nanos:574057311}" May 13 23:43:53.582014 kubelet[3240]: I0513 23:43:53.581984 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thjld\" (UniqueName: \"kubernetes.io/projected/b2a54b36-5063-4c55-a68d-c923d02f6ad4-kube-api-access-thjld\") pod \"calico-kube-controllers-6db4c7d7-xv64n\" (UID: \"b2a54b36-5063-4c55-a68d-c923d02f6ad4\") " pod="calico-system/calico-kube-controllers-6db4c7d7-xv64n" May 13 23:43:53.582813 kubelet[3240]: I0513 23:43:53.582782 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2a54b36-5063-4c55-a68d-c923d02f6ad4-tigera-ca-bundle\") pod \"calico-kube-controllers-6db4c7d7-xv64n\" (UID: \"b2a54b36-5063-4c55-a68d-c923d02f6ad4\") " pod="calico-system/calico-kube-controllers-6db4c7d7-xv64n" May 13 23:43:53.875258 containerd[1805]: time="2025-05-13T23:43:53.875169289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db4c7d7-xv64n,Uid:b2a54b36-5063-4c55-a68d-c923d02f6ad4,Namespace:calico-system,Attempt:0,}" May 13 23:43:54.002466 systemd-networkd[1346]: cali3de736307c7: Link UP May 13 23:43:54.002775 systemd-networkd[1346]: cali3de736307c7: Gained carrier May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.926 [INFO][5887] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--6db4c7d7--xv64n-eth0 calico-kube-controllers-6db4c7d7- calico-system b2a54b36-5063-4c55-a68d-c923d02f6ad4 1005 0 2025-05-13 23:43:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6db4c7d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284.0.0-n-5e434aba7d calico-kube-controllers-6db4c7d7-xv64n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3de736307c7 [] []}} ContainerID="e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" Namespace="calico-system" Pod="calico-kube-controllers-6db4c7d7-xv64n" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--6db4c7d7--xv64n-" May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.926 [INFO][5887] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" Namespace="calico-system" Pod="calico-kube-controllers-6db4c7d7-xv64n" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--6db4c7d7--xv64n-eth0" May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.952 [INFO][5899] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" HandleID="k8s-pod-network.e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--6db4c7d7--xv64n-eth0" May 13 23:43:54.021444 containerd[1805]: 
2025-05-13 23:43:53.963 [INFO][5899] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" HandleID="k8s-pod-network.e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--6db4c7d7--xv64n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000331070), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-5e434aba7d", "pod":"calico-kube-controllers-6db4c7d7-xv64n", "timestamp":"2025-05-13 23:43:53.952488878 +0000 UTC"}, Hostname:"ci-4284.0.0-n-5e434aba7d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.963 [INFO][5899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.963 [INFO][5899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.963 [INFO][5899] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-5e434aba7d' May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.965 [INFO][5899] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.968 [INFO][5899] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.971 [INFO][5899] ipam/ipam.go 489: Trying affinity for 192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.973 [INFO][5899] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.975 [INFO][5899] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.975 [INFO][5899] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.978 [INFO][5899] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4 May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.986 [INFO][5899] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.997 [INFO][5899] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.8/26] block=192.168.106.0/26 handle="k8s-pod-network.e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.997 [INFO][5899] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.8/26] handle="k8s-pod-network.e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" 
host="ci-4284.0.0-n-5e434aba7d" May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.997 [INFO][5899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:43:54.021444 containerd[1805]: 2025-05-13 23:43:53.997 [INFO][5899] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.8/26] IPv6=[] ContainerID="e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" HandleID="k8s-pod-network.e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--6db4c7d7--xv64n-eth0" May 13 23:43:54.022023 containerd[1805]: 2025-05-13 23:43:53.999 [INFO][5887] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" Namespace="calico-system" Pod="calico-kube-controllers-6db4c7d7-xv64n" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--6db4c7d7--xv64n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--6db4c7d7--xv64n-eth0", GenerateName:"calico-kube-controllers-6db4c7d7-", Namespace:"calico-system", SelfLink:"", UID:"b2a54b36-5063-4c55-a68d-c923d02f6ad4", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db4c7d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"", Pod:"calico-kube-controllers-6db4c7d7-xv64n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3de736307c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:54.022023 containerd[1805]: 2025-05-13 23:43:53.999 [INFO][5887] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.8/32] ContainerID="e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" Namespace="calico-system" Pod="calico-kube-controllers-6db4c7d7-xv64n" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--6db4c7d7--xv64n-eth0" May 13 23:43:54.022023 containerd[1805]: 2025-05-13 23:43:53.999 [INFO][5887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3de736307c7 ContainerID="e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" Namespace="calico-system" Pod="calico-kube-controllers-6db4c7d7-xv64n" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--6db4c7d7--xv64n-eth0" May 13 23:43:54.022023 containerd[1805]: 2025-05-13 23:43:54.003 [INFO][5887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" Namespace="calico-system" 
Pod="calico-kube-controllers-6db4c7d7-xv64n" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--6db4c7d7--xv64n-eth0" May 13 23:43:54.022023 containerd[1805]: 2025-05-13 23:43:54.004 [INFO][5887] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" Namespace="calico-system" Pod="calico-kube-controllers-6db4c7d7-xv64n" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--6db4c7d7--xv64n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--6db4c7d7--xv64n-eth0", GenerateName:"calico-kube-controllers-6db4c7d7-", Namespace:"calico-system", SelfLink:"", UID:"b2a54b36-5063-4c55-a68d-c923d02f6ad4", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db4c7d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4", Pod:"calico-kube-controllers-6db4c7d7-xv64n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3de736307c7", MAC:"82:56:43:7e:d4:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:54.022023 containerd[1805]: 2025-05-13 23:43:54.018 [INFO][5887] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" Namespace="calico-system" Pod="calico-kube-controllers-6db4c7d7-xv64n" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--6db4c7d7--xv64n-eth0" May 13 23:43:54.130294 containerd[1805]: time="2025-05-13T23:43:54.128995257Z" level=info msg="connecting to shim e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4" address="unix:///run/containerd/s/ac9db2df3391d3a971380e427c3e7c99de716d8f35daf48932bb85d59d8961d3" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:54.169389 kubelet[3240]: I0513 23:43:54.169167 3240 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc" path="/var/lib/kubelet/pods/9f9d7e09-0870-4cc7-bce7-a13d0ea53ccc/volumes" May 13 23:43:54.216777 systemd[1]: Started cri-containerd-e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4.scope - libcontainer container e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4. 
May 13 23:43:54.288419 containerd[1805]: time="2025-05-13T23:43:54.288206642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db4c7d7-xv64n,Uid:b2a54b36-5063-4c55-a68d-c923d02f6ad4,Namespace:calico-system,Attempt:0,} returns sandbox id \"e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4\"" May 13 23:43:54.309737 containerd[1805]: time="2025-05-13T23:43:54.309704644Z" level=info msg="CreateContainer within sandbox \"e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 23:43:54.345270 containerd[1805]: time="2025-05-13T23:43:54.345001632Z" level=info msg="Container da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:54.375761 containerd[1805]: time="2025-05-13T23:43:54.374872889Z" level=info msg="CreateContainer within sandbox \"e102dde21749e99bb650dca3b23c3476578d747971c947d2d2f6c6ee21cc71d4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9\"" May 13 23:43:54.376750 containerd[1805]: time="2025-05-13T23:43:54.376232092Z" level=info msg="StartContainer for \"da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9\"" May 13 23:43:54.378305 containerd[1805]: time="2025-05-13T23:43:54.378233935Z" level=info msg="connecting to shim da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9" address="unix:///run/containerd/s/ac9db2df3391d3a971380e427c3e7c99de716d8f35daf48932bb85d59d8961d3" protocol=ttrpc version=3 May 13 23:43:54.408247 systemd[1]: Started cri-containerd-da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9.scope - libcontainer container da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9. 
May 13 23:43:54.521558 containerd[1805]: time="2025-05-13T23:43:54.521516651Z" level=info msg="StartContainer for \"da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9\" returns successfully" May 13 23:43:54.635344 containerd[1805]: time="2025-05-13T23:43:54.635300229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51f96b3abd026eaa7a450d8dc473d97a42ffe8b2fb7161a1f74998358d1e1c4a\" id:\"5090734a15929dc595262307e0cd34f9a2b8dfa93cbe9f41b29fa3dda8d8c617\" pid:6116 exit_status:1 exited_at:{seconds:1747179834 nanos:634850348}" May 13 23:43:55.221692 systemd-networkd[1346]: cali3de736307c7: Gained IPv6LL May 13 23:43:55.549519 containerd[1805]: time="2025-05-13T23:43:55.549481785Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9\" id:\"baf03603d463f62e0f08653663a02952d3689ba09c8fc4093fc2db713e937e8d\" pid:6221 exit_status:1 exited_at:{seconds:1747179835 nanos:548756144}" May 13 23:43:56.534403 containerd[1805]: time="2025-05-13T23:43:56.534346197Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9\" id:\"d6211c645402fd3f7a78d06b70d1e6819f7cf9b53a172949790a0e3514f458bf\" pid:6246 exited_at:{seconds:1747179836 nanos:534078956}" May 13 23:43:56.558618 kubelet[3240]: I0513 23:43:56.558429 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6db4c7d7-xv64n" podStartSLOduration=4.558408323 podStartE2EDuration="4.558408323s" podCreationTimestamp="2025-05-13 23:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:55.528572785 +0000 UTC m=+53.483055341" watchObservedRunningTime="2025-05-13 23:43:56.558408323 +0000 UTC m=+54.512890879" May 13 23:44:00.874895 kubelet[3240]: I0513 23:44:00.874715 3240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:44:02.136183 containerd[1805]: time="2025-05-13T23:44:02.136149008Z" level=info msg="StopPodSandbox for \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\"" May 13 23:44:02.136865 containerd[1805]: time="2025-05-13T23:44:02.136708569Z" level=info msg="TearDown network for sandbox \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" successfully" May 13 23:44:02.136865 containerd[1805]: time="2025-05-13T23:44:02.136731089Z" level=info msg="StopPodSandbox for \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" returns successfully" May 13 23:44:02.138624 containerd[1805]: time="2025-05-13T23:44:02.137277890Z" level=info msg="RemovePodSandbox for \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\"" May 13 23:44:02.138624 containerd[1805]: time="2025-05-13T23:44:02.137304210Z" level=info msg="Forcibly stopping sandbox \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\"" May 13 23:44:02.138624 containerd[1805]: time="2025-05-13T23:44:02.137382930Z" level=info msg="TearDown network for sandbox \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" successfully" May 13 23:44:02.140846 containerd[1805]: time="2025-05-13T23:44:02.140811777Z" level=info msg="Ensure that sandbox 4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a in task-service has been cleanup successfully" May 13 23:44:02.158622 containerd[1805]: time="2025-05-13T23:44:02.158551612Z" level=info 
msg="RemovePodSandbox \"4685cb5392ba61d14a2e1fea8ebda95dd76c0c031f5bfc3a89b0bc72f483516a\" returns successfully" May 13 23:44:02.159636 containerd[1805]: time="2025-05-13T23:44:02.159500694Z" level=info msg="StopPodSandbox for \"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\"" May 13 23:44:02.159868 containerd[1805]: time="2025-05-13T23:44:02.159848495Z" level=info msg="TearDown network for sandbox \"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\" successfully" May 13 23:44:02.160111 containerd[1805]: time="2025-05-13T23:44:02.159925495Z" level=info msg="StopPodSandbox for \"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\" returns successfully" May 13 23:44:02.161606 containerd[1805]: time="2025-05-13T23:44:02.160245576Z" level=info msg="RemovePodSandbox for \"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\"" May 13 23:44:02.161606 containerd[1805]: time="2025-05-13T23:44:02.160271896Z" level=info msg="Forcibly stopping sandbox \"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\"" May 13 23:44:02.161606 containerd[1805]: time="2025-05-13T23:44:02.160343176Z" level=info msg="TearDown network for sandbox \"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\" successfully" May 13 23:44:02.162271 containerd[1805]: time="2025-05-13T23:44:02.162245860Z" level=info msg="Ensure that sandbox 775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956 in task-service has been cleanup successfully" May 13 23:44:02.176036 containerd[1805]: time="2025-05-13T23:44:02.175998367Z" level=info msg="RemovePodSandbox \"775f1e97ba3e1bde8c74285785e07d1837839c6be5742cc52450d65416f81956\" returns successfully" May 13 23:44:02.177020 containerd[1805]: time="2025-05-13T23:44:02.176999369Z" level=info msg="StopPodSandbox for \"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\"" May 13 23:44:02.269493 containerd[1805]: 2025-05-13 23:44:02.232 [WARNING][6279] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:44:02.269493 containerd[1805]: 2025-05-13 23:44:02.233 [INFO][6279] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" May 13 23:44:02.269493 containerd[1805]: 2025-05-13 23:44:02.233 [INFO][6279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" iface="eth0" netns="" May 13 23:44:02.269493 containerd[1805]: 2025-05-13 23:44:02.233 [INFO][6279] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" May 13 23:44:02.269493 containerd[1805]: 2025-05-13 23:44:02.233 [INFO][6279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" May 13 23:44:02.269493 containerd[1805]: 2025-05-13 23:44:02.257 [INFO][6286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" HandleID="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:44:02.269493 containerd[1805]: 2025-05-13 23:44:02.257 [INFO][6286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:44:02.269493 containerd[1805]: 2025-05-13 23:44:02.257 [INFO][6286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:44:02.269493 containerd[1805]: 2025-05-13 23:44:02.265 [WARNING][6286] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" HandleID="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:44:02.269493 containerd[1805]: 2025-05-13 23:44:02.265 [INFO][6286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" HandleID="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:44:02.269493 containerd[1805]: 2025-05-13 23:44:02.266 [INFO][6286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:44:02.269493 containerd[1805]: 2025-05-13 23:44:02.267 [INFO][6279] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" May 13 23:44:02.270112 containerd[1805]: time="2025-05-13T23:44:02.269545632Z" level=info msg="TearDown network for sandbox \"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" successfully" May 13 23:44:02.270112 containerd[1805]: time="2025-05-13T23:44:02.269568352Z" level=info msg="StopPodSandbox for \"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" returns successfully" May 13 23:44:02.270641 containerd[1805]: time="2025-05-13T23:44:02.270271194Z" level=info msg="RemovePodSandbox for \"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\"" May 13 23:44:02.270641 containerd[1805]: time="2025-05-13T23:44:02.270314154Z" level=info msg="Forcibly stopping sandbox \"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\"" May 13 23:44:02.338624 containerd[1805]: 2025-05-13 23:44:02.304 [WARNING][6304] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:44:02.338624 containerd[1805]: 2025-05-13 23:44:02.304 [INFO][6304] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" May 13 23:44:02.338624 containerd[1805]: 2025-05-13 23:44:02.304 [INFO][6304] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" iface="eth0" netns="" May 13 23:44:02.338624 containerd[1805]: 2025-05-13 23:44:02.304 [INFO][6304] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" May 13 23:44:02.338624 containerd[1805]: 2025-05-13 23:44:02.304 [INFO][6304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" May 13 23:44:02.338624 containerd[1805]: 2025-05-13 23:44:02.323 [INFO][6311] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" HandleID="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:44:02.338624 containerd[1805]: 2025-05-13 23:44:02.323 [INFO][6311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:44:02.338624 containerd[1805]: 2025-05-13 23:44:02.324 [INFO][6311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:44:02.338624 containerd[1805]: 2025-05-13 23:44:02.334 [WARNING][6311] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" HandleID="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:44:02.338624 containerd[1805]: 2025-05-13 23:44:02.334 [INFO][6311] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" HandleID="k8s-pod-network.3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--kube--controllers--7b7dbd7c4d--d6t84-eth0" May 13 23:44:02.338624 containerd[1805]: 2025-05-13 23:44:02.335 [INFO][6311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:44:02.338624 containerd[1805]: 2025-05-13 23:44:02.337 [INFO][6304] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35" May 13 23:44:02.339301 containerd[1805]: time="2025-05-13T23:44:02.339048090Z" level=info msg="TearDown network for sandbox \"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" successfully" May 13 23:44:02.340974 containerd[1805]: time="2025-05-13T23:44:02.340853134Z" level=info msg="Ensure that sandbox 3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35 in task-service has been cleanup successfully" May 13 23:44:02.354923 containerd[1805]: time="2025-05-13T23:44:02.354818481Z" level=info msg="RemovePodSandbox \"3bc458229d290dfa6cd8f923135358f6d4f794b6d3f83256dfec2e8d0137ab35\" returns successfully" May 13 23:44:09.623741 containerd[1805]: time="2025-05-13T23:44:09.623704768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9\" id:\"4ef23c6d4387e5518fb3f0210bae697b69ea32a611d51c8c561ea8dacc86c3df\" pid:6340 exited_at:{seconds:1747179849 nanos:623286928}" May 13 23:44:17.172448 kubelet[3240]: I0513 23:44:17.172065 3240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:44:17.238224 kubelet[3240]: I0513 23:44:17.238176 3240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:44:17.239965 containerd[1805]: time="2025-05-13T23:44:17.239785877Z" level=info msg="StopContainer for \"5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2\" with timeout 30 (s)" May 13 23:44:17.240653 containerd[1805]: time="2025-05-13T23:44:17.240539398Z" level=info msg="Stop container \"5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2\" with signal terminated" May 13 23:44:17.263754 systemd[1]: cri-containerd-5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2.scope: Deactivated successfully. May 13 23:44:17.264028 systemd[1]: cri-containerd-5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2.scope: Consumed 1.654s CPU time, 42.2M memory peak. 
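The `exited_at:{seconds:… nanos:…}` fields in the TaskExit events above are a Unix timestamp split into whole seconds and nanoseconds; 1747179834 s is 2025-05-13 23:43:54 UTC, which lines up with the journal prefix of the entry that carries it. A minimal Go sketch of the conversion, using values copied from the first TaskExit entry in this section:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at:{seconds:1747179834 nanos:634850348}, copied from the
	// TaskExit event for container 51f96b3a… above.
	exitedAt := time.Unix(1747179834, 634850348).UTC()

	// Prints 2025-05-13T23:43:54.634850348Z, matching the journal
	// prefix "May 13 23:43:54.635344" of the entry that logged it.
	fmt.Println(exitedAt.Format(time.RFC3339Nano))
}
```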
May 13 23:44:17.267744 containerd[1805]: time="2025-05-13T23:44:17.265323206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2\" id:\"5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2\" pid:5115 exit_status:1 exited_at:{seconds:1747179857 nanos:264548205}" May 13 23:44:17.267744 containerd[1805]: time="2025-05-13T23:44:17.265334567Z" level=info msg="received exit event container_id:\"5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2\" id:\"5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2\" pid:5115 exit_status:1 exited_at:{seconds:1747179857 nanos:264548205}" May 13 23:44:17.299761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2-rootfs.mount: Deactivated successfully. May 13 23:44:17.335870 systemd[1]: Created slice kubepods-besteffort-pod3b77fdd4_177a_4343_b223_c60dc16f54d1.slice - libcontainer container kubepods-besteffort-pod3b77fdd4_177a_4343_b223_c60dc16f54d1.slice. May 13 23:44:17.415793 containerd[1805]: time="2025-05-13T23:44:17.415751460Z" level=info msg="StopContainer for \"5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2\" returns successfully" May 13 23:44:17.416497 containerd[1805]: time="2025-05-13T23:44:17.416467981Z" level=info msg="StopPodSandbox for \"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\"" May 13 23:44:17.416563 containerd[1805]: time="2025-05-13T23:44:17.416528302Z" level=info msg="Container to stop \"5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:44:17.420072 kubelet[3240]: I0513 23:44:17.420015 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3b77fdd4-177a-4343-b223-c60dc16f54d1-calico-apiserver-certs\") pod \"calico-apiserver-9fc98d758-4k6h7\" (UID: \"3b77fdd4-177a-4343-b223-c60dc16f54d1\") " pod="calico-apiserver/calico-apiserver-9fc98d758-4k6h7" May 13 23:44:17.420072 kubelet[3240]: I0513 23:44:17.420060 3240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzzl9\" (UniqueName: \"kubernetes.io/projected/3b77fdd4-177a-4343-b223-c60dc16f54d1-kube-api-access-qzzl9\") pod \"calico-apiserver-9fc98d758-4k6h7\" (UID: \"3b77fdd4-177a-4343-b223-c60dc16f54d1\") " pod="calico-apiserver/calico-apiserver-9fc98d758-4k6h7" May 13 23:44:17.423714 systemd[1]: cri-containerd-0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786.scope: Deactivated successfully. 
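Two different exit conventions appear in these events: `exit_status:1` above is the container's own non-zero exit code, while the `exit_status:137` reported for the sandbox a few entries below is the shell-style encoding 128 + signal number, i.e. SIGKILL (9). A small illustrative helper, not containerd code, that decodes the convention:

```go
package main

import "fmt"

// decodeExitStatus interprets the shell-style convention used by these
// TaskExit events; it is illustrative, not containerd code.
func decodeExitStatus(status int) string {
	if status > 128 {
		return fmt.Sprintf("killed by signal %d", status-128)
	}
	return fmt.Sprintf("exited with code %d", status)
}

func main() {
	fmt.Println(decodeExitStatus(1))   // exit_status:1 above: ordinary non-zero exit
	fmt.Println(decodeExitStatus(137)) // exit_status:137 below: 128+9, i.e. SIGKILL
}
```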
May 13 23:44:17.427326 containerd[1805]: time="2025-05-13T23:44:17.426650361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\" id:\"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\" pid:4566 exit_status:137 exited_at:{seconds:1747179857 nanos:426234200}" May 13 23:44:17.455156 containerd[1805]: time="2025-05-13T23:44:17.455104537Z" level=info msg="shim disconnected" id=0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786 namespace=k8s.io May 13 23:44:17.455670 containerd[1805]: time="2025-05-13T23:44:17.455483138Z" level=warning msg="cleaning up after shim disconnected" id=0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786 namespace=k8s.io May 13 23:44:17.455670 containerd[1805]: time="2025-05-13T23:44:17.455519818Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:44:17.458055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786-rootfs.mount: Deactivated successfully. May 13 23:44:17.499621 containerd[1805]: time="2025-05-13T23:44:17.497381579Z" level=info msg="received exit event sandbox_id:\"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\" exit_status:137 exited_at:{seconds:1747179857 nanos:426234200}" May 13 23:44:17.500162 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786-shm.mount: Deactivated successfully. May 13 23:44:17.549634 kubelet[3240]: I0513 23:44:17.549570 3240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" May 13 23:44:17.562871 systemd-networkd[1346]: cali3b5250e78a7: Link DOWN May 13 23:44:17.562878 systemd-networkd[1346]: cali3b5250e78a7: Lost carrier May 13 23:44:17.636843 containerd[1805]: 2025-05-13 23:44:17.561 [INFO][6432] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" May 13 23:44:17.636843 containerd[1805]: 2025-05-13 23:44:17.562 [INFO][6432] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" iface="eth0" netns="/var/run/netns/cni-0816e11d-1baf-2844-28b5-b143467d4a1d" May 13 23:44:17.636843 containerd[1805]: 2025-05-13 23:44:17.562 [INFO][6432] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" iface="eth0" netns="/var/run/netns/cni-0816e11d-1baf-2844-28b5-b143467d4a1d" May 13 23:44:17.636843 containerd[1805]: 2025-05-13 23:44:17.569 [INFO][6432] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" after=7.665534ms iface="eth0" netns="/var/run/netns/cni-0816e11d-1baf-2844-28b5-b143467d4a1d" May 13 23:44:17.636843 containerd[1805]: 2025-05-13 23:44:17.569 [INFO][6432] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" May 13 23:44:17.636843 containerd[1805]: 2025-05-13 23:44:17.569 [INFO][6432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" May 13 23:44:17.636843 containerd[1805]: 2025-05-13 23:44:17.590 [INFO][6445] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" HandleID="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:44:17.636843 containerd[1805]: 2025-05-13 23:44:17.590 [INFO][6445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:44:17.636843 containerd[1805]: 2025-05-13 23:44:17.591 [INFO][6445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:44:17.636843 containerd[1805]: 2025-05-13 23:44:17.632 [INFO][6445] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" HandleID="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:44:17.636843 containerd[1805]: 2025-05-13 23:44:17.632 [INFO][6445] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" HandleID="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:44:17.636843 containerd[1805]: 2025-05-13 23:44:17.633 [INFO][6445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:44:17.636843 containerd[1805]: 2025-05-13 23:44:17.635 [INFO][6432] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" May 13 23:44:17.637346 containerd[1805]: time="2025-05-13T23:44:17.637148492Z" level=info msg="TearDown network for sandbox \"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\" successfully" May 13 23:44:17.637346 containerd[1805]: time="2025-05-13T23:44:17.637173932Z" level=info msg="StopPodSandbox for \"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\" returns successfully" May 13 23:44:17.640985 containerd[1805]: time="2025-05-13T23:44:17.640947979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9fc98d758-4k6h7,Uid:3b77fdd4-177a-4343-b223-c60dc16f54d1,Namespace:calico-apiserver,Attempt:0,}" May 13 23:44:17.722667 kubelet[3240]: I0513 23:44:17.722201 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssw8l\" (UniqueName: \"kubernetes.io/projected/a1d416f3-ba05-4865-b5c3-91cc08110d19-kube-api-access-ssw8l\") pod \"a1d416f3-ba05-4865-b5c3-91cc08110d19\" (UID: \"a1d416f3-ba05-4865-b5c3-91cc08110d19\") " May 13 23:44:17.722667 kubelet[3240]: I0513 23:44:17.722270 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a1d416f3-ba05-4865-b5c3-91cc08110d19-calico-apiserver-certs\") pod \"a1d416f3-ba05-4865-b5c3-91cc08110d19\" (UID: \"a1d416f3-ba05-4865-b5c3-91cc08110d19\") " May 13 23:44:17.725384 kubelet[3240]: I0513 23:44:17.725339 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d416f3-ba05-4865-b5c3-91cc08110d19-kube-api-access-ssw8l" (OuterVolumeSpecName: "kube-api-access-ssw8l") pod "a1d416f3-ba05-4865-b5c3-91cc08110d19" (UID: "a1d416f3-ba05-4865-b5c3-91cc08110d19"). InnerVolumeSpecName "kube-api-access-ssw8l". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 23:44:17.725680 kubelet[3240]: I0513 23:44:17.725649 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1d416f3-ba05-4865-b5c3-91cc08110d19-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "a1d416f3-ba05-4865-b5c3-91cc08110d19" (UID: "a1d416f3-ba05-4865-b5c3-91cc08110d19"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 23:44:17.777536 systemd-networkd[1346]: calidb562c82c9b: Link UP May 13 23:44:17.779336 systemd-networkd[1346]: calidb562c82c9b: Gained carrier May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.688 [INFO][6454] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--4k6h7-eth0 calico-apiserver-9fc98d758- calico-apiserver 3b77fdd4-177a-4343-b223-c60dc16f54d1 1107 0 2025-05-13 23:44:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9fc98d758 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-5e434aba7d calico-apiserver-9fc98d758-4k6h7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidb562c82c9b [] []}} ContainerID="e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-4k6h7" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--4k6h7-" May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.688 [INFO][6454] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-4k6h7" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--4k6h7-eth0" May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.712 [INFO][6467] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" HandleID="k8s-pod-network.e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--4k6h7-eth0" May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.735 [INFO][6467] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" HandleID="k8s-pod-network.e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--4k6h7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aab80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-5e434aba7d", "pod":"calico-apiserver-9fc98d758-4k6h7", "timestamp":"2025-05-13 23:44:17.712110598 +0000 UTC"}, Hostname:"ci-4284.0.0-n-5e434aba7d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.735 [INFO][6467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.735 [INFO][6467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.735 [INFO][6467] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-5e434aba7d' May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.736 [INFO][6467] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" host="ci-4284.0.0-n-5e434aba7d" May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.740 [INFO][6467] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-5e434aba7d" May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.743 [INFO][6467] ipam/ipam.go 489: Trying affinity for 192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.745 [INFO][6467] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.747 [INFO][6467] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4284.0.0-n-5e434aba7d" May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.747 [INFO][6467] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" host="ci-4284.0.0-n-5e434aba7d" May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.748 [INFO][6467] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8 May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.753 [INFO][6467] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" host="ci-4284.0.0-n-5e434aba7d" May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.768 [INFO][6467] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.9/26] block=192.168.106.0/26 handle="k8s-pod-network.e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" host="ci-4284.0.0-n-5e434aba7d" May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.768 [INFO][6467] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.9/26] handle="k8s-pod-network.e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" host="ci-4284.0.0-n-5e434aba7d" May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.768 [INFO][6467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
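The ipam/ipam.go sequence above is Calico's block-affinity allocation run under the host-wide IPAM lock: look up the host's affine blocks, try the affinity for 192.168.106.0/26, load and confirm the block, assign one address from it under a freshly created handle, and write the block back to claim the IP. A rough Go sketch of that final assignment step, with a made-up Block type standing in for Calico's datastore model:

```go
package main

import (
	"fmt"
	"net"
)

// Block is a stand-in for Calico's allocation block; the real model
// lives in the datastore and is far richer.
type Block struct {
	CIDR      net.IPNet
	Allocated map[string]string // IP -> handle
}

// assignOne mirrors the logged sequence: scan the block for a free IP,
// record the handle against it ("claim IPs"), and hand it back.
func assignOne(b *Block, handle string) (net.IP, error) {
	base := b.CIDR.IP.Mask(b.CIDR.Mask)
	for i := 0; i < 64; i++ { // a /26 holds 64 addresses
		cand := make(net.IP, len(base))
		copy(cand, base)
		cand[len(cand)-1] += byte(i)
		if _, taken := b.Allocated[cand.String()]; !taken {
			b.Allocated[cand.String()] = handle // the "Writing block in order to claim IPs" step
			return cand, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", b.CIDR.String())
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.106.0/26")
	b := &Block{CIDR: *cidr, Allocated: map[string]string{}}
	// Pretend .0 through .8 were claimed by earlier pods on this node.
	for i := 0; i < 9; i++ {
		b.Allocated[fmt.Sprintf("192.168.106.%d", i)] = "earlier"
	}
	ip, _ := assignOne(b, "k8s-pod-network.e47310f6c3af…")
	fmt.Println(ip) // 192.168.106.9, as claimed in the entries that follow
}
```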
May 13 23:44:17.810988 containerd[1805]: 2025-05-13 23:44:17.768 [INFO][6467] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.9/26] IPv6=[] ContainerID="e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" HandleID="k8s-pod-network.e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--4k6h7-eth0" May 13 23:44:17.811580 containerd[1805]: 2025-05-13 23:44:17.773 [INFO][6454] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-4k6h7" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--4k6h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--4k6h7-eth0", GenerateName:"calico-apiserver-9fc98d758-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b77fdd4-177a-4343-b223-c60dc16f54d1", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9fc98d758", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"", Pod:"calico-apiserver-9fc98d758-4k6h7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb562c82c9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:44:17.811580 containerd[1805]: 2025-05-13 23:44:17.773 [INFO][6454] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.9/32] ContainerID="e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-4k6h7" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--4k6h7-eth0" May 13 23:44:17.811580 containerd[1805]: 2025-05-13 23:44:17.773 [INFO][6454] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb562c82c9b ContainerID="e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-4k6h7" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--4k6h7-eth0" May 13 23:44:17.811580 containerd[1805]: 2025-05-13 23:44:17.778 [INFO][6454] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-4k6h7" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--4k6h7-eth0" May 13 23:44:17.811580 containerd[1805]: 2025-05-13 23:44:17.779 [INFO][6454] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-4k6h7" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--4k6h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--4k6h7-eth0", GenerateName:"calico-apiserver-9fc98d758-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b77fdd4-177a-4343-b223-c60dc16f54d1", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9fc98d758", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-5e434aba7d", ContainerID:"e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8", Pod:"calico-apiserver-9fc98d758-4k6h7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb562c82c9b", MAC:"ee:86:58:80:9a:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:44:17.811580 containerd[1805]: 2025-05-13 23:44:17.808 [INFO][6454] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" Namespace="calico-apiserver" Pod="calico-apiserver-9fc98d758-4k6h7" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--9fc98d758--4k6h7-eth0" May 13 23:44:17.825616 kubelet[3240]: I0513 23:44:17.823152 3240 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ssw8l\" (UniqueName: \"kubernetes.io/projected/a1d416f3-ba05-4865-b5c3-91cc08110d19-kube-api-access-ssw8l\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:44:17.825616 kubelet[3240]: I0513 23:44:17.823186 3240 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a1d416f3-ba05-4865-b5c3-91cc08110d19-calico-apiserver-certs\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:44:17.872358 containerd[1805]: time="2025-05-13T23:44:17.872310391Z" level=info msg="connecting to shim e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8" address="unix:///run/containerd/s/e1cea532f9e4e9478e423e2d4c8c6bdb154335eb216d07e119c8786862714bae" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:17.893833 systemd[1]: Started cri-containerd-e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8.scope - libcontainer container e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8. 
May 13 23:44:17.941749 containerd[1805]: time="2025-05-13T23:44:17.941623286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9fc98d758-4k6h7,Uid:3b77fdd4-177a-4343-b223-c60dc16f54d1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8\"" May 13 23:44:17.945486 containerd[1805]: time="2025-05-13T23:44:17.945363853Z" level=info msg="CreateContainer within sandbox \"e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:44:17.966810 containerd[1805]: time="2025-05-13T23:44:17.966751895Z" level=info msg="Container c5e9199c8e49359fd0aa0554791fecea08689b4addc773f75b2886ac6a817e93: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:17.991913 containerd[1805]: time="2025-05-13T23:44:17.990938622Z" level=info msg="CreateContainer within sandbox \"e47310f6c3afdf0b55f878158fc1e9604530d66963c29c6986c81b8a3c1e8dc8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c5e9199c8e49359fd0aa0554791fecea08689b4addc773f75b2886ac6a817e93\"" May 13 23:44:17.991913 containerd[1805]: time="2025-05-13T23:44:17.991655904Z" level=info msg="StartContainer for \"c5e9199c8e49359fd0aa0554791fecea08689b4addc773f75b2886ac6a817e93\"" May 13 23:44:17.993392 containerd[1805]: time="2025-05-13T23:44:17.993358467Z" level=info msg="connecting to shim c5e9199c8e49359fd0aa0554791fecea08689b4addc773f75b2886ac6a817e93" address="unix:///run/containerd/s/e1cea532f9e4e9478e423e2d4c8c6bdb154335eb216d07e119c8786862714bae" protocol=ttrpc version=3 May 13 23:44:18.018790 systemd[1]: Started cri-containerd-c5e9199c8e49359fd0aa0554791fecea08689b4addc773f75b2886ac6a817e93.scope - libcontainer container c5e9199c8e49359fd0aa0554791fecea08689b4addc773f75b2886ac6a817e93. May 13 23:44:18.065876 containerd[1805]: time="2025-05-13T23:44:18.065829848Z" level=info msg="StartContainer for \"c5e9199c8e49359fd0aa0554791fecea08689b4addc773f75b2886ac6a817e93\" returns successfully" May 13 23:44:18.167785 systemd[1]: Removed slice kubepods-besteffort-poda1d416f3_ba05_4865_b5c3_91cc08110d19.slice - libcontainer container kubepods-besteffort-poda1d416f3_ba05_4865_b5c3_91cc08110d19.slice. May 13 23:44:18.168030 systemd[1]: kubepods-besteffort-poda1d416f3_ba05_4865_b5c3_91cc08110d19.slice: Consumed 1.675s CPU time, 42.9M memory peak, 807K read from disk. May 13 23:44:18.304985 systemd[1]: run-netns-cni\x2d0816e11d\x2d1baf\x2d2844\x2d28b5\x2db143467d4a1d.mount: Deactivated successfully. May 13 23:44:18.305277 systemd[1]: var-lib-kubelet-pods-a1d416f3\x2dba05\x2d4865\x2db5c3\x2d91cc08110d19-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 13 23:44:18.306282 systemd[1]: var-lib-kubelet-pods-a1d416f3\x2dba05\x2d4865\x2db5c3\x2d91cc08110d19-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dssw8l.mount: Deactivated successfully. 
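The mount-unit names above (`var-lib-kubelet-pods-…-kube\x2dapi\x2daccess\x2dssw8l.mount`) use systemd's unit-name escaping: "/" in the path becomes "-", and escaped bytes appear as \xNN (\x2d for a literal dash, \x7e for "~"). A small Go sketch of the reverse mapping, assuming only the rules visible in these units; the full systemd-escape(1) algorithm covers more cases:

```go
package main

import (
	"fmt"
	"strings"
)

// unescapeUnit reverses the escaping rules visible in the mount units
// above; real systemd-escape(1) handles arbitrary bytes, not just these.
func unescapeUnit(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	name = strings.ReplaceAll(name, `\x2d`, "\x00") // protect escaped "-"
	name = strings.ReplaceAll(name, `\x7e`, "~")    // escaped "~"
	name = strings.ReplaceAll(name, "-", "/")
	return "/" + strings.ReplaceAll(name, "\x00", "-")
}

func main() {
	fmt.Println(unescapeUnit(`var-lib-kubelet-pods-a1d416f3\x2dba05\x2d4865\x2db5c3\x2d91cc08110d19-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dssw8l.mount`))
	// /var/lib/kubelet/pods/a1d416f3-ba05-4865-b5c3-91cc08110d19/volumes/kubernetes.io~projected/kube-api-access-ssw8l
}
```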
May 13 23:44:18.575552 kubelet[3240]: I0513 23:44:18.575020 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9fc98d758-4k6h7" podStartSLOduration=1.575002402 podStartE2EDuration="1.575002402s" podCreationTimestamp="2025-05-13 23:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:44:18.57394864 +0000 UTC m=+76.528431276" watchObservedRunningTime="2025-05-13 23:44:18.575002402 +0000 UTC m=+76.529484998" May 13 23:44:19.285779 systemd-networkd[1346]: calidb562c82c9b: Gained IPv6LL May 13 23:44:19.642552 containerd[1805]: time="2025-05-13T23:44:19.641877524Z" level=info msg="StopContainer for \"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\" with timeout 30 (s)" May 13 23:44:19.643218 containerd[1805]: time="2025-05-13T23:44:19.643118406Z" level=info msg="Stop container \"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\" with signal terminated" May 13 23:44:19.689003 systemd[1]: cri-containerd-79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57.scope: Deactivated successfully. May 13 23:44:19.689909 systemd[1]: cri-containerd-79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57.scope: Consumed 1.321s CPU time, 52.9M memory peak. May 13 23:44:19.693845 containerd[1805]: time="2025-05-13T23:44:19.693806985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\" id:\"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\" pid:5187 exit_status:1 exited_at:{seconds:1747179859 nanos:692397862}" May 13 23:44:19.694107 containerd[1805]: time="2025-05-13T23:44:19.693979425Z" level=info msg="received exit event container_id:\"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\" id:\"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\" pid:5187 exit_status:1 exited_at:{seconds:1747179859 nanos:692397862}" May 13 23:44:19.728451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57-rootfs.mount: Deactivated successfully. May 13 23:44:19.824707 containerd[1805]: time="2025-05-13T23:44:19.824574760Z" level=info msg="StopContainer for \"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\" returns successfully" May 13 23:44:19.825088 containerd[1805]: time="2025-05-13T23:44:19.825058481Z" level=info msg="StopPodSandbox for \"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\"" May 13 23:44:19.825137 containerd[1805]: time="2025-05-13T23:44:19.825128241Z" level=info msg="Container to stop \"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:44:19.832360 systemd[1]: cri-containerd-d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9.scope: Deactivated successfully. 
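In the pod_startup_latency_tracker entry above, firstStartedPulling and lastFinishedPulling are zero times (no image pull was needed), so the reported podStartSLOduration is simply observedRunningTime minus podCreationTimestamp. A quick Go check against the logged values:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-05-13 23:44:17 +0000 UTC")
	running, _ := time.Parse(layout, "2025-05-13 23:44:18.575002402 +0000 UTC")

	// With zero firstStartedPulling/lastFinishedPulling (no image pull),
	// the tracked duration is observedRunningTime - podCreationTimestamp.
	fmt.Println(running.Sub(created)) // 1.575002402s, the logged podStartSLOduration
}
```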
May 13 23:44:19.838793 containerd[1805]: time="2025-05-13T23:44:19.838651708Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\" id:\"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\" pid:4788 exit_status:137 exited_at:{seconds:1747179859 nanos:837879466}" May 13 23:44:19.872423 containerd[1805]: time="2025-05-13T23:44:19.872377733Z" level=info msg="shim disconnected" id=d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9 namespace=k8s.io May 13 23:44:19.872545 containerd[1805]: time="2025-05-13T23:44:19.872411174Z" level=warning msg="cleaning up after shim disconnected" id=d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9 namespace=k8s.io May 13 23:44:19.872545 containerd[1805]: time="2025-05-13T23:44:19.872440694Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:44:19.874695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9-rootfs.mount: Deactivated successfully. May 13 23:44:19.918711 containerd[1805]: time="2025-05-13T23:44:19.918099223Z" level=info msg="received exit event sandbox_id:\"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\" exit_status:137 exited_at:{seconds:1747179859 nanos:837879466}" May 13 23:44:19.921743 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9-shm.mount: Deactivated successfully. May 13 23:44:19.977929 systemd-networkd[1346]: cali07080203104: Link DOWN May 13 23:44:19.977939 systemd-networkd[1346]: cali07080203104: Lost carrier May 13 23:44:20.071998 containerd[1805]: 2025-05-13 23:44:19.976 [INFO][6640] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" May 13 23:44:20.071998 containerd[1805]: 2025-05-13 23:44:19.976 [INFO][6640] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" iface="eth0" netns="/var/run/netns/cni-d7b6c78d-69b5-7a49-bf78-0dcf3da7cae7" May 13 23:44:20.071998 containerd[1805]: 2025-05-13 23:44:19.977 [INFO][6640] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" iface="eth0" netns="/var/run/netns/cni-d7b6c78d-69b5-7a49-bf78-0dcf3da7cae7" May 13 23:44:20.071998 containerd[1805]: 2025-05-13 23:44:19.982 [INFO][6640] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" after=5.811731ms iface="eth0" netns="/var/run/netns/cni-d7b6c78d-69b5-7a49-bf78-0dcf3da7cae7" May 13 23:44:20.071998 containerd[1805]: 2025-05-13 23:44:19.982 [INFO][6640] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" May 13 23:44:20.071998 containerd[1805]: 2025-05-13 23:44:19.982 [INFO][6640] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" May 13 23:44:20.071998 containerd[1805]: 2025-05-13 23:44:20.011 [INFO][6647] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" HandleID="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:44:20.071998 containerd[1805]: 2025-05-13 23:44:20.011 [INFO][6647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:44:20.071998 containerd[1805]: 2025-05-13 23:44:20.011 [INFO][6647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:44:20.071998 containerd[1805]: 2025-05-13 23:44:20.066 [INFO][6647] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" HandleID="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:44:20.071998 containerd[1805]: 2025-05-13 23:44:20.066 [INFO][6647] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" HandleID="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:44:20.071998 containerd[1805]: 2025-05-13 23:44:20.067 [INFO][6647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:44:20.071998 containerd[1805]: 2025-05-13 23:44:20.069 [INFO][6640] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" May 13 23:44:20.074356 systemd[1]: run-netns-cni\x2dd7b6c78d\x2d69b5\x2d7a49\x2dbf78\x2d0dcf3da7cae7.mount: Deactivated successfully. 
May 13 23:44:20.074744 containerd[1805]: time="2025-05-13T23:44:20.074365328Z" level=info msg="TearDown network for sandbox \"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\" successfully" May 13 23:44:20.074744 containerd[1805]: time="2025-05-13T23:44:20.074392968Z" level=info msg="StopPodSandbox for \"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\" returns successfully" May 13 23:44:20.139326 kubelet[3240]: I0513 23:44:20.139282 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sxdc\" (UniqueName: \"kubernetes.io/projected/879c42d3-a34a-4e3d-babc-43636a4eac7f-kube-api-access-5sxdc\") pod \"879c42d3-a34a-4e3d-babc-43636a4eac7f\" (UID: \"879c42d3-a34a-4e3d-babc-43636a4eac7f\") " May 13 23:44:20.139731 kubelet[3240]: I0513 23:44:20.139343 3240 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/879c42d3-a34a-4e3d-babc-43636a4eac7f-calico-apiserver-certs\") pod \"879c42d3-a34a-4e3d-babc-43636a4eac7f\" (UID: \"879c42d3-a34a-4e3d-babc-43636a4eac7f\") " May 13 23:44:20.144331 systemd[1]: var-lib-kubelet-pods-879c42d3\x2da34a\x2d4e3d\x2dbabc\x2d43636a4eac7f-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 13 23:44:20.144804 kubelet[3240]: I0513 23:44:20.144765 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879c42d3-a34a-4e3d-babc-43636a4eac7f-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "879c42d3-a34a-4e3d-babc-43636a4eac7f" (UID: "879c42d3-a34a-4e3d-babc-43636a4eac7f"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 23:44:20.147700 kubelet[3240]: I0513 23:44:20.147578 3240 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879c42d3-a34a-4e3d-babc-43636a4eac7f-kube-api-access-5sxdc" (OuterVolumeSpecName: "kube-api-access-5sxdc") pod "879c42d3-a34a-4e3d-babc-43636a4eac7f" (UID: "879c42d3-a34a-4e3d-babc-43636a4eac7f"). InnerVolumeSpecName "kube-api-access-5sxdc". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 23:44:20.161792 kubelet[3240]: I0513 23:44:20.161762 3240 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1d416f3-ba05-4865-b5c3-91cc08110d19" path="/var/lib/kubelet/pods/a1d416f3-ba05-4865-b5c3-91cc08110d19/volumes" May 13 23:44:20.166303 systemd[1]: Removed slice kubepods-besteffort-pod879c42d3_a34a_4e3d_babc_43636a4eac7f.slice - libcontainer container kubepods-besteffort-pod879c42d3_a34a_4e3d_babc_43636a4eac7f.slice. May 13 23:44:20.166549 systemd[1]: kubepods-besteffort-pod879c42d3_a34a_4e3d_babc_43636a4eac7f.slice: Consumed 1.340s CPU time, 53.1M memory peak. 
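As with the 0702… sandbox earlier, the IPAM release above runs under the host-wide lock and tries the handleID first, then the workloadID; when a handle is already gone the plugin logs "Asked to release address but it doesn't exist. Ignoring" and carries on, which is what makes these teardowns safe to repeat. A hedged Go sketch of that best-effort pattern, with a stand-in datastore type:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// ipam is a stand-in for the IPAM datastore client; the real calls go
// through ipam/ipam_plugin.go while the host-wide IPAM lock is held.
type ipam struct{ handles map[string]string }

func (s *ipam) release(id string) error {
	if _, ok := s.handles[id]; !ok {
		return errNotFound
	}
	delete(s.handles, id)
	return nil
}

// releaseEndpoint mirrors the logged order: handleID first, then
// workloadID, ignoring "doesn't exist" so teardown stays idempotent.
func releaseEndpoint(s *ipam, handleID, workloadID string) {
	for _, id := range []string{handleID, workloadID} {
		if err := s.release(id); errors.Is(err, errNotFound) {
			fmt.Printf("Asked to release %s but it doesn't exist. Ignoring\n", id)
		}
	}
}

func main() {
	s := &ipam{handles: map[string]string{"k8s-pod-network.d049e96e…": "192.168.106.x"}}
	releaseEndpoint(s, "k8s-pod-network.d049e96e…", "workload-id") // handle released; workloadID had nothing
	releaseEndpoint(s, "k8s-pod-network.d049e96e…", "workload-id") // second run: both ignored
}
```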
May 13 23:44:20.240676 kubelet[3240]: I0513 23:44:20.240535 3240 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5sxdc\" (UniqueName: \"kubernetes.io/projected/879c42d3-a34a-4e3d-babc-43636a4eac7f-kube-api-access-5sxdc\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:44:20.240676 kubelet[3240]: I0513 23:44:20.240561 3240 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/879c42d3-a34a-4e3d-babc-43636a4eac7f-calico-apiserver-certs\") on node \"ci-4284.0.0-n-5e434aba7d\" DevicePath \"\"" May 13 23:44:20.563719 kubelet[3240]: I0513 23:44:20.561930 3240 scope.go:117] "RemoveContainer" containerID="79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57" May 13 23:44:20.565613 containerd[1805]: time="2025-05-13T23:44:20.564807245Z" level=info msg="RemoveContainer for \"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\"" May 13 23:44:20.581857 containerd[1805]: time="2025-05-13T23:44:20.581824638Z" level=info msg="RemoveContainer for \"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\" returns successfully" May 13 23:44:20.582296 kubelet[3240]: I0513 23:44:20.582239 3240 scope.go:117] "RemoveContainer" containerID="79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57" May 13 23:44:20.582777 containerd[1805]: time="2025-05-13T23:44:20.582721119Z" level=error msg="ContainerStatus for \"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\": not found" May 13 23:44:20.583024 kubelet[3240]: E0513 23:44:20.582964 3240 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\": not found" containerID="79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57" May 13 23:44:20.583024 kubelet[3240]: I0513 23:44:20.582992 3240 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57"} err="failed to get container status \"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\": rpc error: code = NotFound desc = an error occurred when try to find container \"79811a13a26dd220970b298af763be7149e85b2ceb7ebf89bbf485123fc1de57\": not found" May 13 23:44:20.725845 systemd[1]: var-lib-kubelet-pods-879c42d3\x2da34a\x2d4e3d\x2dbabc\x2d43636a4eac7f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5sxdc.mount: Deactivated successfully. 
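The RemoveContainer / ContainerStatus exchange above is the expected shape of container deletion: once the container is gone, the follow-up status query returns a gRPC NotFound, which the kubelet records and treats as completion rather than as a failure. A sketch of the usual client-side check, assuming only the standard grpc status/codes packages (this is not the kubelet's actual code path):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether a CRI error just means the container was
// removed before the status query, as with the NotFound logged above.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	err := status.Error(codes.NotFound,
		`an error occurred when try to find container "79811a13…": not found`)
	fmt.Println(alreadyGone(err)) // true: safe to treat removal as complete
}
```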
May 13 23:44:22.163425 kubelet[3240]: I0513 23:44:22.163370 3240 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879c42d3-a34a-4e3d-babc-43636a4eac7f" path="/var/lib/kubelet/pods/879c42d3-a34a-4e3d-babc-43636a4eac7f/volumes" May 13 23:44:24.564547 containerd[1805]: time="2025-05-13T23:44:24.564277727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51f96b3abd026eaa7a450d8dc473d97a42ffe8b2fb7161a1f74998358d1e1c4a\" id:\"7d1cacbd564194ac0393ab5740e45818b4b8c4fa6ca20fe26f30eb2d88eff049\" pid:6673 exited_at:{seconds:1747179864 nanos:563954847}" May 13 23:44:26.536895 containerd[1805]: time="2025-05-13T23:44:26.536792096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9\" id:\"01e2dcab8d7376ea7dcb9c325572fe774f6db93d83ea5f38dbe669de1fe10efe\" pid:6700 exited_at:{seconds:1747179866 nanos:536518335}" May 13 23:44:54.558511 containerd[1805]: time="2025-05-13T23:44:54.558451056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51f96b3abd026eaa7a450d8dc473d97a42ffe8b2fb7161a1f74998358d1e1c4a\" id:\"35a5678b8bf4b94371cace0b103a0963cdb25c9560454c1f8754372e85c1ac2a\" pid:6739 exited_at:{seconds:1747179894 nanos:558158296}" May 13 23:44:56.536227 containerd[1805]: time="2025-05-13T23:44:56.536131401Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9\" id:\"ebc9408a63ecdf425869ac595f18494ac7c8c49173c08cefec37490c6c15a656\" pid:6764 exited_at:{seconds:1747179896 nanos:535893800}" May 13 23:45:02.358983 kubelet[3240]: I0513 23:45:02.358274 3240 scope.go:117] "RemoveContainer" containerID="5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2" May 13 23:45:02.360986 containerd[1805]: time="2025-05-13T23:45:02.360840105Z" level=info msg="RemoveContainer for \"5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2\"" May 13 23:45:02.375160 containerd[1805]: time="2025-05-13T23:45:02.375089691Z" level=info msg="RemoveContainer for \"5880fd78e25a102af192b3d371e7f7ecc386a44e4bba4cfd4a6b47e58955d5d2\" returns successfully" May 13 23:45:02.376560 containerd[1805]: time="2025-05-13T23:45:02.376529534Z" level=info msg="StopPodSandbox for \"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\"" May 13 23:45:02.443560 containerd[1805]: 2025-05-13 23:45:02.412 [WARNING][6789] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:45:02.443560 containerd[1805]: 2025-05-13 23:45:02.412 [INFO][6789] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" May 13 23:45:02.443560 containerd[1805]: 2025-05-13 23:45:02.412 [INFO][6789] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" iface="eth0" netns="" May 13 23:45:02.443560 containerd[1805]: 2025-05-13 23:45:02.412 [INFO][6789] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" May 13 23:45:02.443560 containerd[1805]: 2025-05-13 23:45:02.412 [INFO][6789] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" May 13 23:45:02.443560 containerd[1805]: 2025-05-13 23:45:02.430 [INFO][6797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" HandleID="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:45:02.443560 containerd[1805]: 2025-05-13 23:45:02.430 [INFO][6797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:02.443560 containerd[1805]: 2025-05-13 23:45:02.430 [INFO][6797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:45:02.443560 containerd[1805]: 2025-05-13 23:45:02.439 [WARNING][6797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" HandleID="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:45:02.443560 containerd[1805]: 2025-05-13 23:45:02.439 [INFO][6797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" HandleID="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:45:02.443560 containerd[1805]: 2025-05-13 23:45:02.440 [INFO][6797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:45:02.443560 containerd[1805]: 2025-05-13 23:45:02.441 [INFO][6789] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" May 13 23:45:02.444020 containerd[1805]: time="2025-05-13T23:45:02.443614816Z" level=info msg="TearDown network for sandbox \"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\" successfully" May 13 23:45:02.444020 containerd[1805]: time="2025-05-13T23:45:02.443639616Z" level=info msg="StopPodSandbox for \"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\" returns successfully" May 13 23:45:02.444620 containerd[1805]: time="2025-05-13T23:45:02.444526698Z" level=info msg="RemovePodSandbox for \"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\"" May 13 23:45:02.444620 containerd[1805]: time="2025-05-13T23:45:02.444560298Z" level=info msg="Forcibly stopping sandbox \"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\"" May 13 23:45:02.512911 containerd[1805]: 2025-05-13 23:45:02.478 [WARNING][6816] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:45:02.512911 containerd[1805]: 2025-05-13 23:45:02.478 [INFO][6816] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" May 13 23:45:02.512911 containerd[1805]: 2025-05-13 23:45:02.478 [INFO][6816] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" iface="eth0" netns="" May 13 23:45:02.512911 containerd[1805]: 2025-05-13 23:45:02.478 [INFO][6816] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" May 13 23:45:02.512911 containerd[1805]: 2025-05-13 23:45:02.478 [INFO][6816] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" May 13 23:45:02.512911 containerd[1805]: 2025-05-13 23:45:02.498 [INFO][6824] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" HandleID="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:45:02.512911 containerd[1805]: 2025-05-13 23:45:02.498 [INFO][6824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:02.512911 containerd[1805]: 2025-05-13 23:45:02.498 [INFO][6824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:45:02.512911 containerd[1805]: 2025-05-13 23:45:02.507 [WARNING][6824] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" HandleID="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:45:02.512911 containerd[1805]: 2025-05-13 23:45:02.508 [INFO][6824] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" HandleID="k8s-pod-network.0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--nzwmm-eth0" May 13 23:45:02.512911 containerd[1805]: 2025-05-13 23:45:02.509 [INFO][6824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:45:02.512911 containerd[1805]: 2025-05-13 23:45:02.510 [INFO][6816] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786" May 13 23:45:02.513252 containerd[1805]: time="2025-05-13T23:45:02.512953903Z" level=info msg="TearDown network for sandbox \"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\" successfully" May 13 23:45:02.514707 containerd[1805]: time="2025-05-13T23:45:02.514677146Z" level=info msg="Ensure that sandbox 0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786 in task-service has been cleanup successfully" May 13 23:45:02.531370 containerd[1805]: time="2025-05-13T23:45:02.531180576Z" level=info msg="RemovePodSandbox \"0702921be4087e7e9005f60086b8d13fd93d4dbc61d76b8a0a13bc27ae0e9786\" returns successfully" May 13 23:45:02.531684 containerd[1805]: time="2025-05-13T23:45:02.531654057Z" level=info msg="StopPodSandbox for \"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\"" May 13 23:45:02.615525 containerd[1805]: 2025-05-13 23:45:02.581 [WARNING][6843] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:45:02.615525 containerd[1805]: 2025-05-13 23:45:02.582 [INFO][6843] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" May 13 23:45:02.615525 containerd[1805]: 2025-05-13 23:45:02.582 [INFO][6843] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" iface="eth0" netns="" May 13 23:45:02.615525 containerd[1805]: 2025-05-13 23:45:02.582 [INFO][6843] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" May 13 23:45:02.615525 containerd[1805]: 2025-05-13 23:45:02.582 [INFO][6843] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" May 13 23:45:02.615525 containerd[1805]: 2025-05-13 23:45:02.601 [INFO][6850] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" HandleID="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:45:02.615525 containerd[1805]: 2025-05-13 23:45:02.601 [INFO][6850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:02.615525 containerd[1805]: 2025-05-13 23:45:02.601 [INFO][6850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:45:02.615525 containerd[1805]: 2025-05-13 23:45:02.610 [WARNING][6850] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" HandleID="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:45:02.615525 containerd[1805]: 2025-05-13 23:45:02.610 [INFO][6850] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" HandleID="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:45:02.615525 containerd[1805]: 2025-05-13 23:45:02.611 [INFO][6850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:45:02.615525 containerd[1805]: 2025-05-13 23:45:02.613 [INFO][6843] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" May 13 23:45:02.615525 containerd[1805]: time="2025-05-13T23:45:02.615366250Z" level=info msg="TearDown network for sandbox \"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\" successfully" May 13 23:45:02.615525 containerd[1805]: time="2025-05-13T23:45:02.615389370Z" level=info msg="StopPodSandbox for \"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\" returns successfully" May 13 23:45:02.616906 containerd[1805]: time="2025-05-13T23:45:02.616716772Z" level=info msg="RemovePodSandbox for \"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\"" May 13 23:45:02.616906 containerd[1805]: time="2025-05-13T23:45:02.616749012Z" level=info msg="Forcibly stopping sandbox \"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\"" May 13 23:45:02.688364 containerd[1805]: 2025-05-13 23:45:02.653 [WARNING][6869] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" WorkloadEndpoint="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:45:02.688364 containerd[1805]: 2025-05-13 23:45:02.654 [INFO][6869] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" May 13 23:45:02.688364 containerd[1805]: 2025-05-13 23:45:02.654 [INFO][6869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" iface="eth0" netns="" May 13 23:45:02.688364 containerd[1805]: 2025-05-13 23:45:02.654 [INFO][6869] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" May 13 23:45:02.688364 containerd[1805]: 2025-05-13 23:45:02.654 [INFO][6869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" May 13 23:45:02.688364 containerd[1805]: 2025-05-13 23:45:02.674 [INFO][6876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" HandleID="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:45:02.688364 containerd[1805]: 2025-05-13 23:45:02.675 [INFO][6876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:02.688364 containerd[1805]: 2025-05-13 23:45:02.675 [INFO][6876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:45:02.688364 containerd[1805]: 2025-05-13 23:45:02.683 [WARNING][6876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" HandleID="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:45:02.688364 containerd[1805]: 2025-05-13 23:45:02.683 [INFO][6876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" HandleID="k8s-pod-network.d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" Workload="ci--4284.0.0--n--5e434aba7d-k8s-calico--apiserver--6d667d8584--wmdz5-eth0" May 13 23:45:02.688364 containerd[1805]: 2025-05-13 23:45:02.684 [INFO][6876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:45:02.688364 containerd[1805]: 2025-05-13 23:45:02.686 [INFO][6869] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9" May 13 23:45:02.688785 containerd[1805]: time="2025-05-13T23:45:02.688414903Z" level=info msg="TearDown network for sandbox \"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\" successfully" May 13 23:45:02.689909 containerd[1805]: time="2025-05-13T23:45:02.689872386Z" level=info msg="Ensure that sandbox d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9 in task-service has been cleanup successfully" May 13 23:45:02.704082 containerd[1805]: time="2025-05-13T23:45:02.704005172Z" level=info msg="RemovePodSandbox \"d049e96efe2959e2f373a8ac3b702f0b3c1ea52b1b8e4f649200c23f7fce19a9\" returns successfully" May 13 23:45:09.618002 containerd[1805]: time="2025-05-13T23:45:09.617738958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9\" id:\"b6944bf6d310fbec0c701b0bff78853e16d52e89396622de5dcec9c6bc13c103\" pid:6894 exited_at:{seconds:1747179909 nanos:617505877}" May 13 23:45:11.811733 systemd[1]: Started sshd@7-10.200.20.14:22-10.200.16.10:50782.service - OpenSSH per-connection server daemon (10.200.16.10:50782). May 13 23:45:12.308640 sshd[6910]: Accepted publickey for core from 10.200.16.10 port 50782 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:45:12.310942 sshd-session[6910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:45:12.316082 systemd-logind[1754]: New session 10 of user core. May 13 23:45:12.320771 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 23:45:12.749193 sshd[6912]: Connection closed by 10.200.16.10 port 50782 May 13 23:45:12.749925 sshd-session[6910]: pam_unix(sshd:session): session closed for user core May 13 23:45:12.753234 systemd[1]: sshd@7-10.200.20.14:22-10.200.16.10:50782.service: Deactivated successfully. May 13 23:45:12.755352 systemd[1]: session-10.scope: Deactivated successfully. May 13 23:45:12.757638 systemd-logind[1754]: Session 10 logged out. Waiting for processes to exit. May 13 23:45:12.759181 systemd-logind[1754]: Removed session 10. May 13 23:45:17.845820 systemd[1]: Started sshd@8-10.200.20.14:22-10.200.16.10:50788.service - OpenSSH per-connection server daemon (10.200.16.10:50788). 
May 13 23:45:18.338766 sshd[6931]: Accepted publickey for core from 10.200.16.10 port 50788 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:45:18.340032 sshd-session[6931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:45:18.344543 systemd-logind[1754]: New session 11 of user core.
May 13 23:45:18.353734 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 23:45:18.761815 sshd[6933]: Connection closed by 10.200.16.10 port 50788
May 13 23:45:18.762621 sshd-session[6931]: pam_unix(sshd:session): session closed for user core
May 13 23:45:18.766529 systemd[1]: sshd@8-10.200.20.14:22-10.200.16.10:50788.service: Deactivated successfully.
May 13 23:45:18.766644 systemd-logind[1754]: Session 11 logged out. Waiting for processes to exit.
May 13 23:45:18.768870 systemd[1]: session-11.scope: Deactivated successfully.
May 13 23:45:18.771014 systemd-logind[1754]: Removed session 11.
May 13 23:45:23.844352 systemd[1]: Started sshd@9-10.200.20.14:22-10.200.16.10:47974.service - OpenSSH per-connection server daemon (10.200.16.10:47974).
May 13 23:45:24.308501 sshd[6946]: Accepted publickey for core from 10.200.16.10 port 47974 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:45:24.309801 sshd-session[6946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:45:24.314024 systemd-logind[1754]: New session 12 of user core.
May 13 23:45:24.319744 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 23:45:24.556788 containerd[1805]: time="2025-05-13T23:45:24.556752034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51f96b3abd026eaa7a450d8dc473d97a42ffe8b2fb7161a1f74998358d1e1c4a\" id:\"f0effe990592d20f105a8fc4f50edbd156bb0c18e644794b10746badf1fcdb29\" pid:6962 exited_at:{seconds:1747179924 nanos:556190633}"
May 13 23:45:24.712998 sshd[6948]: Connection closed by 10.200.16.10 port 47974
May 13 23:45:24.713770 sshd-session[6946]: pam_unix(sshd:session): session closed for user core
May 13 23:45:24.717159 systemd[1]: sshd@9-10.200.20.14:22-10.200.16.10:47974.service: Deactivated successfully.
May 13 23:45:24.720357 systemd[1]: session-12.scope: Deactivated successfully.
May 13 23:45:24.721192 systemd-logind[1754]: Session 12 logged out. Waiting for processes to exit.
May 13 23:45:24.722208 systemd-logind[1754]: Removed session 12.
May 13 23:45:24.798368 systemd[1]: Started sshd@10-10.200.20.14:22-10.200.16.10:47978.service - OpenSSH per-connection server daemon (10.200.16.10:47978).
May 13 23:45:25.262453 sshd[6985]: Accepted publickey for core from 10.200.16.10 port 47978 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:45:25.263763 sshd-session[6985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:45:25.267846 systemd-logind[1754]: New session 13 of user core.
May 13 23:45:25.274729 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 23:45:25.683108 sshd[6988]: Connection closed by 10.200.16.10 port 47978
May 13 23:45:25.683726 sshd-session[6985]: pam_unix(sshd:session): session closed for user core
May 13 23:45:25.687577 systemd[1]: sshd@10-10.200.20.14:22-10.200.16.10:47978.service: Deactivated successfully.
May 13 23:45:25.690301 systemd[1]: session-13.scope: Deactivated successfully.
May 13 23:45:25.691814 systemd-logind[1754]: Session 13 logged out. Waiting for processes to exit.
May 13 23:45:25.692913 systemd-logind[1754]: Removed session 13.
May 13 23:45:25.791314 systemd[1]: Started sshd@11-10.200.20.14:22-10.200.16.10:47994.service - OpenSSH per-connection server daemon (10.200.16.10:47994).
May 13 23:45:26.288326 sshd[6998]: Accepted publickey for core from 10.200.16.10 port 47994 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:45:26.289992 sshd-session[6998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:45:26.295542 systemd-logind[1754]: New session 14 of user core.
May 13 23:45:26.301760 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 23:45:26.539684 containerd[1805]: time="2025-05-13T23:45:26.539554824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9\" id:\"b12a721280df8c9fe2b347f1f8ca2d1c8591020965ee4312dabfc3d748fef930\" pid:7013 exited_at:{seconds:1747179926 nanos:537749981}"
May 13 23:45:26.713172 sshd[7000]: Connection closed by 10.200.16.10 port 47994
May 13 23:45:26.713074 sshd-session[6998]: pam_unix(sshd:session): session closed for user core
May 13 23:45:26.716282 systemd[1]: sshd@11-10.200.20.14:22-10.200.16.10:47994.service: Deactivated successfully.
May 13 23:45:26.718119 systemd[1]: session-14.scope: Deactivated successfully.
May 13 23:45:26.719556 systemd-logind[1754]: Session 14 logged out. Waiting for processes to exit.
May 13 23:45:26.720453 systemd-logind[1754]: Removed session 14.
May 13 23:45:31.828418 systemd[1]: Started sshd@12-10.200.20.14:22-10.200.16.10:56054.service - OpenSSH per-connection server daemon (10.200.16.10:56054).
May 13 23:45:32.329187 sshd[7037]: Accepted publickey for core from 10.200.16.10 port 56054 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:45:32.330507 sshd-session[7037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:45:32.334435 systemd-logind[1754]: New session 15 of user core.
May 13 23:45:32.344737 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 23:45:32.767636 sshd[7043]: Connection closed by 10.200.16.10 port 56054
May 13 23:45:32.768162 sshd-session[7037]: pam_unix(sshd:session): session closed for user core
May 13 23:45:32.771699 systemd[1]: sshd@12-10.200.20.14:22-10.200.16.10:56054.service: Deactivated successfully.
May 13 23:45:32.775172 systemd[1]: session-15.scope: Deactivated successfully.
May 13 23:45:32.777630 systemd-logind[1754]: Session 15 logged out. Waiting for processes to exit.
May 13 23:45:32.779453 systemd-logind[1754]: Removed session 15.
May 13 23:45:37.855827 systemd[1]: Started sshd@13-10.200.20.14:22-10.200.16.10:56058.service - OpenSSH per-connection server daemon (10.200.16.10:56058).
May 13 23:45:38.349540 sshd[7067]: Accepted publickey for core from 10.200.16.10 port 56058 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:45:38.351140 sshd-session[7067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:45:38.355790 systemd-logind[1754]: New session 16 of user core.
May 13 23:45:38.358764 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 23:45:38.774275 sshd[7069]: Connection closed by 10.200.16.10 port 56058
May 13 23:45:38.775125 sshd-session[7067]: pam_unix(sshd:session): session closed for user core
May 13 23:45:38.780142 systemd[1]: sshd@13-10.200.20.14:22-10.200.16.10:56058.service: Deactivated successfully.
May 13 23:45:38.784327 systemd[1]: session-16.scope: Deactivated successfully.
May 13 23:45:38.785368 systemd-logind[1754]: Session 16 logged out. Waiting for processes to exit.
May 13 23:45:38.786397 systemd-logind[1754]: Removed session 16.
May 13 23:45:43.862150 systemd[1]: Started sshd@14-10.200.20.14:22-10.200.16.10:43012.service - OpenSSH per-connection server daemon (10.200.16.10:43012).
May 13 23:45:44.359723 sshd[7085]: Accepted publickey for core from 10.200.16.10 port 43012 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:45:44.361058 sshd-session[7085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:45:44.366737 systemd-logind[1754]: New session 17 of user core.
May 13 23:45:44.369911 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 23:45:44.781562 sshd[7088]: Connection closed by 10.200.16.10 port 43012
May 13 23:45:44.782086 sshd-session[7085]: pam_unix(sshd:session): session closed for user core
May 13 23:45:44.785795 systemd[1]: sshd@14-10.200.20.14:22-10.200.16.10:43012.service: Deactivated successfully.
May 13 23:45:44.787650 systemd[1]: session-17.scope: Deactivated successfully.
May 13 23:45:44.788442 systemd-logind[1754]: Session 17 logged out. Waiting for processes to exit.
May 13 23:45:44.791317 systemd-logind[1754]: Removed session 17.
May 13 23:45:44.868771 systemd[1]: Started sshd@15-10.200.20.14:22-10.200.16.10:43014.service - OpenSSH per-connection server daemon (10.200.16.10:43014).
May 13 23:45:45.337891 sshd[7100]: Accepted publickey for core from 10.200.16.10 port 43014 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:45:45.339239 sshd-session[7100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:45:45.343670 systemd-logind[1754]: New session 18 of user core.
May 13 23:45:45.353789 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 23:45:45.867493 sshd[7102]: Connection closed by 10.200.16.10 port 43014
May 13 23:45:45.866568 sshd-session[7100]: pam_unix(sshd:session): session closed for user core
May 13 23:45:45.870678 systemd[1]: sshd@15-10.200.20.14:22-10.200.16.10:43014.service: Deactivated successfully.
May 13 23:45:45.872559 systemd[1]: session-18.scope: Deactivated successfully.
May 13 23:45:45.874145 systemd-logind[1754]: Session 18 logged out. Waiting for processes to exit.
May 13 23:45:45.875363 systemd-logind[1754]: Removed session 18.
May 13 23:45:45.949939 systemd[1]: Started sshd@16-10.200.20.14:22-10.200.16.10:43018.service - OpenSSH per-connection server daemon (10.200.16.10:43018).
May 13 23:45:46.418724 sshd[7112]: Accepted publickey for core from 10.200.16.10 port 43018 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:45:46.420003 sshd-session[7112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:45:46.424512 systemd-logind[1754]: New session 19 of user core.
May 13 23:45:46.431723 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 23:45:47.477626 sshd[7114]: Connection closed by 10.200.16.10 port 43018
May 13 23:45:47.478581 sshd-session[7112]: pam_unix(sshd:session): session closed for user core
May 13 23:45:47.482246 systemd[1]: sshd@16-10.200.20.14:22-10.200.16.10:43018.service: Deactivated successfully.
May 13 23:45:47.485518 systemd[1]: session-19.scope: Deactivated successfully.
May 13 23:45:47.486434 systemd-logind[1754]: Session 19 logged out. Waiting for processes to exit.
May 13 23:45:47.487244 systemd-logind[1754]: Removed session 19.
May 13 23:45:47.564330 systemd[1]: Started sshd@17-10.200.20.14:22-10.200.16.10:43020.service - OpenSSH per-connection server daemon (10.200.16.10:43020).
May 13 23:45:48.061459 sshd[7133]: Accepted publickey for core from 10.200.16.10 port 43020 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:45:48.064797 sshd-session[7133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:45:48.069556 systemd-logind[1754]: New session 20 of user core.
May 13 23:45:48.071749 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 23:45:48.595119 sshd[7135]: Connection closed by 10.200.16.10 port 43020
May 13 23:45:48.595988 sshd-session[7133]: pam_unix(sshd:session): session closed for user core
May 13 23:45:48.599556 systemd[1]: sshd@17-10.200.20.14:22-10.200.16.10:43020.service: Deactivated successfully.
May 13 23:45:48.601743 systemd[1]: session-20.scope: Deactivated successfully.
May 13 23:45:48.603799 systemd-logind[1754]: Session 20 logged out. Waiting for processes to exit.
May 13 23:45:48.605174 systemd-logind[1754]: Removed session 20.
May 13 23:45:48.696830 systemd[1]: Started sshd@18-10.200.20.14:22-10.200.16.10:37850.service - OpenSSH per-connection server daemon (10.200.16.10:37850).
May 13 23:45:49.165357 sshd[7145]: Accepted publickey for core from 10.200.16.10 port 37850 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:45:49.166627 sshd-session[7145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:45:49.170646 systemd-logind[1754]: New session 21 of user core.
May 13 23:45:49.173849 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 23:45:49.567112 sshd[7147]: Connection closed by 10.200.16.10 port 37850
May 13 23:45:49.567702 sshd-session[7145]: pam_unix(sshd:session): session closed for user core
May 13 23:45:49.571169 systemd[1]: sshd@18-10.200.20.14:22-10.200.16.10:37850.service: Deactivated successfully.
May 13 23:45:49.572975 systemd[1]: session-21.scope: Deactivated successfully.
May 13 23:45:49.573664 systemd-logind[1754]: Session 21 logged out. Waiting for processes to exit.
May 13 23:45:49.574486 systemd-logind[1754]: Removed session 21.
May 13 23:45:54.554259 containerd[1805]: time="2025-05-13T23:45:54.554217992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51f96b3abd026eaa7a450d8dc473d97a42ffe8b2fb7161a1f74998358d1e1c4a\" id:\"a9389ae475768bc614be8ac9237add1650959d71caa2bcd24aa143ec024cd831\" pid:7171 exited_at:{seconds:1747179954 nanos:553698711}"
May 13 23:45:54.660282 systemd[1]: Started sshd@19-10.200.20.14:22-10.200.16.10:37854.service - OpenSSH per-connection server daemon (10.200.16.10:37854).
May 13 23:45:55.120161 sshd[7184]: Accepted publickey for core from 10.200.16.10 port 37854 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:45:55.121617 sshd-session[7184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:45:55.126033 systemd-logind[1754]: New session 22 of user core.
May 13 23:45:55.128735 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 23:45:55.527830 sshd[7186]: Connection closed by 10.200.16.10 port 37854
May 13 23:45:55.528549 sshd-session[7184]: pam_unix(sshd:session): session closed for user core
May 13 23:45:55.531975 systemd[1]: sshd@19-10.200.20.14:22-10.200.16.10:37854.service: Deactivated successfully.
May 13 23:45:55.534355 systemd[1]: session-22.scope: Deactivated successfully.
May 13 23:45:55.535392 systemd-logind[1754]: Session 22 logged out. Waiting for processes to exit.
May 13 23:45:55.536474 systemd-logind[1754]: Removed session 22.
May 13 23:45:56.535879 containerd[1805]: time="2025-05-13T23:45:56.535812520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9\" id:\"f31963e216e8bde97996bb96967377b75db5b9fc09f1c122813ef3c6702db498\" pid:7212 exited_at:{seconds:1747179956 nanos:535554160}"
May 13 23:46:00.619459 systemd[1]: Started sshd@20-10.200.20.14:22-10.200.16.10:43056.service - OpenSSH per-connection server daemon (10.200.16.10:43056).
May 13 23:46:01.110338 sshd[7222]: Accepted publickey for core from 10.200.16.10 port 43056 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:46:01.111683 sshd-session[7222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:46:01.116779 systemd-logind[1754]: New session 23 of user core.
May 13 23:46:01.121738 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 23:46:01.536034 sshd[7224]: Connection closed by 10.200.16.10 port 43056
May 13 23:46:01.536743 sshd-session[7222]: pam_unix(sshd:session): session closed for user core
May 13 23:46:01.540042 systemd[1]: sshd@20-10.200.20.14:22-10.200.16.10:43056.service: Deactivated successfully.
May 13 23:46:01.542464 systemd[1]: session-23.scope: Deactivated successfully.
May 13 23:46:01.543447 systemd-logind[1754]: Session 23 logged out. Waiting for processes to exit.
May 13 23:46:01.545067 systemd-logind[1754]: Removed session 23.
May 13 23:46:06.621401 systemd[1]: Started sshd@21-10.200.20.14:22-10.200.16.10:43062.service - OpenSSH per-connection server daemon (10.200.16.10:43062).
May 13 23:46:07.082420 sshd[7238]: Accepted publickey for core from 10.200.16.10 port 43062 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:46:07.083701 sshd-session[7238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:46:07.088377 systemd-logind[1754]: New session 24 of user core.
May 13 23:46:07.091731 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 23:46:07.479042 sshd[7240]: Connection closed by 10.200.16.10 port 43062
May 13 23:46:07.478125 sshd-session[7238]: pam_unix(sshd:session): session closed for user core
May 13 23:46:07.480947 systemd-logind[1754]: Session 24 logged out. Waiting for processes to exit.
May 13 23:46:07.481240 systemd[1]: sshd@21-10.200.20.14:22-10.200.16.10:43062.service: Deactivated successfully.
May 13 23:46:07.483499 systemd[1]: session-24.scope: Deactivated successfully.
May 13 23:46:07.485268 systemd-logind[1754]: Removed session 24.
May 13 23:46:09.622567 containerd[1805]: time="2025-05-13T23:46:09.622363977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9\" id:\"fa6671cb970ba01a8c24fba4a5490092a31c90237423d25b799cfd5f256e879e\" pid:7263 exited_at:{seconds:1747179969 nanos:622129337}"
May 13 23:46:12.560349 systemd[1]: Started sshd@22-10.200.20.14:22-10.200.16.10:45322.service - OpenSSH per-connection server daemon (10.200.16.10:45322).
May 13 23:46:13.025126 sshd[7275]: Accepted publickey for core from 10.200.16.10 port 45322 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:46:13.026510 sshd-session[7275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:46:13.031474 systemd-logind[1754]: New session 25 of user core.
May 13 23:46:13.038727 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 23:46:13.434251 sshd[7277]: Connection closed by 10.200.16.10 port 45322
May 13 23:46:13.434784 sshd-session[7275]: pam_unix(sshd:session): session closed for user core
May 13 23:46:13.438145 systemd[1]: sshd@22-10.200.20.14:22-10.200.16.10:45322.service: Deactivated successfully.
May 13 23:46:13.440168 systemd[1]: session-25.scope: Deactivated successfully.
May 13 23:46:13.441071 systemd-logind[1754]: Session 25 logged out. Waiting for processes to exit.
May 13 23:46:13.442339 systemd-logind[1754]: Removed session 25.
May 13 23:46:18.525536 systemd[1]: Started sshd@23-10.200.20.14:22-10.200.16.10:53044.service - OpenSSH per-connection server daemon (10.200.16.10:53044).
May 13 23:46:19.021484 sshd[7289]: Accepted publickey for core from 10.200.16.10 port 53044 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:46:19.022773 sshd-session[7289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:46:19.027423 systemd-logind[1754]: New session 26 of user core.
May 13 23:46:19.030733 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 23:46:19.447282 sshd[7291]: Connection closed by 10.200.16.10 port 53044
May 13 23:46:19.447893 sshd-session[7289]: pam_unix(sshd:session): session closed for user core
May 13 23:46:19.451694 systemd[1]: sshd@23-10.200.20.14:22-10.200.16.10:53044.service: Deactivated successfully.
May 13 23:46:19.453488 systemd[1]: session-26.scope: Deactivated successfully.
May 13 23:46:19.455212 systemd-logind[1754]: Session 26 logged out. Waiting for processes to exit.
May 13 23:46:19.456334 systemd-logind[1754]: Removed session 26.
May 13 23:46:24.537165 systemd[1]: Started sshd@24-10.200.20.14:22-10.200.16.10:53056.service - OpenSSH per-connection server daemon (10.200.16.10:53056).
May 13 23:46:24.559007 containerd[1805]: time="2025-05-13T23:46:24.558962654Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51f96b3abd026eaa7a450d8dc473d97a42ffe8b2fb7161a1f74998358d1e1c4a\" id:\"895222567812ba170b36b6abf03d754eb7313044bdfb96f876cc95292d86746f\" pid:7316 exited_at:{seconds:1747179984 nanos:558260973}"
May 13 23:46:25.032657 sshd[7327]: Accepted publickey for core from 10.200.16.10 port 53056 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:46:25.033948 sshd-session[7327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:46:25.038071 systemd-logind[1754]: New session 27 of user core.
May 13 23:46:25.043736 systemd[1]: Started session-27.scope - Session 27 of User core.
May 13 23:46:25.452222 sshd[7331]: Connection closed by 10.200.16.10 port 53056
May 13 23:46:25.452809 sshd-session[7327]: pam_unix(sshd:session): session closed for user core
May 13 23:46:25.455847 systemd[1]: sshd@24-10.200.20.14:22-10.200.16.10:53056.service: Deactivated successfully.
May 13 23:46:25.457497 systemd[1]: session-27.scope: Deactivated successfully.
May 13 23:46:25.459299 systemd-logind[1754]: Session 27 logged out. Waiting for processes to exit.
May 13 23:46:25.460959 systemd-logind[1754]: Removed session 27.
May 13 23:46:26.534826 containerd[1805]: time="2025-05-13T23:46:26.534770541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da9dbd9aa5a0a666af78d4c3f07c5fcee4fa48134e508fbfd8f09c7cf7a917a9\" id:\"d2736bcbe1f96a243d607333b179e5c2d5b863164b8702dd5202a98530cb7d98\" pid:7354 exited_at:{seconds:1747179986 nanos:534372260}"