May 13 23:41:53.314180 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 13 23:41:53.314203 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025 May 13 23:41:53.314211 kernel: KASLR enabled May 13 23:41:53.314217 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') May 13 23:41:53.314224 kernel: printk: bootconsole [pl11] enabled May 13 23:41:53.314230 kernel: efi: EFI v2.7 by EDK II May 13 23:41:53.314237 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 May 13 23:41:53.314243 kernel: random: crng init done May 13 23:41:53.314248 kernel: secureboot: Secure boot disabled May 13 23:41:53.314254 kernel: ACPI: Early table checksum verification disabled May 13 23:41:53.314260 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) May 13 23:41:53.314266 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:41:53.314272 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:41:53.314279 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) May 13 23:41:53.314287 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:41:53.314293 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:41:53.314299 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:41:53.314307 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:41:53.314313 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:41:53.314319 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:41:53.314325 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) May 13 23:41:53.314331 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:41:53.314337 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 May 13 23:41:53.314344 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] May 13 23:41:53.314350 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] May 13 23:41:53.314358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] May 13 23:41:53.314364 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] May 13 23:41:53.314371 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] May 13 23:41:53.314379 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] May 13 23:41:53.314385 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] May 13 23:41:53.314391 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] May 13 23:41:53.314397 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] May 13 23:41:53.314404 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] May 13 23:41:53.314410 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] May 13 23:41:53.314416 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] May 13 23:41:53.314422 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] May 13 23:41:53.314429 kernel: Zone ranges: May 13 
23:41:53.314435 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] May 13 23:41:53.314441 kernel: DMA32 empty May 13 23:41:53.314447 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] May 13 23:41:53.314457 kernel: Movable zone start for each node May 13 23:41:53.314463 kernel: Early memory node ranges May 13 23:41:53.314470 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] May 13 23:41:53.314476 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] May 13 23:41:53.314483 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] May 13 23:41:53.314491 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] May 13 23:41:53.314497 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] May 13 23:41:53.314504 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] May 13 23:41:53.314510 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] May 13 23:41:53.314516 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] May 13 23:41:53.314523 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] May 13 23:41:53.314529 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] May 13 23:41:53.314536 kernel: On node 0, zone DMA: 36 pages in unavailable ranges May 13 23:41:53.314542 kernel: psci: probing for conduit method from ACPI. May 13 23:41:53.314549 kernel: psci: PSCIv1.1 detected in firmware. May 13 23:41:53.314555 kernel: psci: Using standard PSCI v0.2 function IDs May 13 23:41:53.314561 kernel: psci: MIGRATE_INFO_TYPE not supported. May 13 23:41:53.314569 kernel: psci: SMC Calling Convention v1.4 May 13 23:41:53.314576 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 13 23:41:53.314582 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 May 13 23:41:53.314589 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 13 23:41:53.314595 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 13 23:41:53.314602 kernel: pcpu-alloc: [0] 0 [0] 1 May 13 23:41:53.314608 kernel: Detected PIPT I-cache on CPU0 May 13 23:41:53.314615 kernel: CPU features: detected: GIC system register CPU interface May 13 23:41:53.314621 kernel: CPU features: detected: Hardware dirty bit management May 13 23:41:53.314628 kernel: CPU features: detected: Spectre-BHB May 13 23:41:53.314634 kernel: CPU features: kernel page table isolation forced ON by KASLR May 13 23:41:53.314642 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 13 23:41:53.314649 kernel: CPU features: detected: ARM erratum 1418040 May 13 23:41:53.316684 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) May 13 23:41:53.316707 kernel: CPU features: detected: SSBS not fully self-synchronizing May 13 23:41:53.316715 kernel: alternatives: applying boot alternatives May 13 23:41:53.316723 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 13 23:41:53.316730 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 13 23:41:53.316737 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 23:41:53.316744 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 23:41:53.316751 kernel: Fallback order for Node 0: 0 May 13 23:41:53.316757 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 May 13 23:41:53.316769 kernel: Policy zone: Normal May 13 23:41:53.316776 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 23:41:53.316783 kernel: software IO TLB: area num 2. May 13 23:41:53.316789 kernel: software IO TLB: mapped [mem 0x0000000036520000-0x000000003a520000] (64MB) May 13 23:41:53.316796 kernel: Memory: 3983464K/4194160K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 210696K reserved, 0K cma-reserved) May 13 23:41:53.316803 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 13 23:41:53.316809 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 23:41:53.316817 kernel: rcu: RCU event tracing is enabled. May 13 23:41:53.316824 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 13 23:41:53.316831 kernel: Trampoline variant of Tasks RCU enabled. May 13 23:41:53.316838 kernel: Tracing variant of Tasks RCU enabled. May 13 23:41:53.316846 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 23:41:53.316853 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 13 23:41:53.316860 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 13 23:41:53.316866 kernel: GICv3: 960 SPIs implemented May 13 23:41:53.316873 kernel: GICv3: 0 Extended SPIs implemented May 13 23:41:53.316879 kernel: Root IRQ handler: gic_handle_irq May 13 23:41:53.316886 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 13 23:41:53.316893 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 May 13 23:41:53.316899 kernel: ITS: No ITS available, not enabling LPIs May 13 23:41:53.316906 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 13 23:41:53.316912 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:41:53.316919 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 13 23:41:53.316927 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 13 23:41:53.316934 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 13 23:41:53.316941 kernel: Console: colour dummy device 80x25 May 13 23:41:53.316948 kernel: printk: console [tty1] enabled May 13 23:41:53.316954 kernel: ACPI: Core revision 20230628 May 13 23:41:53.316961 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 13 23:41:53.316968 kernel: pid_max: default: 32768 minimum: 301 May 13 23:41:53.316975 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 23:41:53.316982 kernel: landlock: Up and running. May 13 23:41:53.316990 kernel: SELinux: Initializing. May 13 23:41:53.316997 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:41:53.317004 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:41:53.317010 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
May 13 23:41:53.317017 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:41:53.317024 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 May 13 23:41:53.317031 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 May 13 23:41:53.317045 kernel: Hyper-V: enabling crash_kexec_post_notifiers May 13 23:41:53.317052 kernel: rcu: Hierarchical SRCU implementation. May 13 23:41:53.317059 kernel: rcu: Max phase no-delay instances is 400. May 13 23:41:53.317066 kernel: Remapping and enabling EFI services. May 13 23:41:53.317073 kernel: smp: Bringing up secondary CPUs ... May 13 23:41:53.317082 kernel: Detected PIPT I-cache on CPU1 May 13 23:41:53.317089 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 May 13 23:41:53.317096 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:41:53.317103 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 13 23:41:53.317110 kernel: smp: Brought up 1 node, 2 CPUs May 13 23:41:53.317119 kernel: SMP: Total of 2 processors activated. May 13 23:41:53.317126 kernel: CPU features: detected: 32-bit EL0 Support May 13 23:41:53.317133 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence May 13 23:41:53.317140 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 13 23:41:53.317147 kernel: CPU features: detected: CRC32 instructions May 13 23:41:53.317154 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 13 23:41:53.317161 kernel: CPU features: detected: LSE atomic instructions May 13 23:41:53.317168 kernel: CPU features: detected: Privileged Access Never May 13 23:41:53.317176 kernel: CPU: All CPU(s) started at EL1 May 13 23:41:53.317184 kernel: alternatives: applying system-wide alternatives May 13 23:41:53.317192 kernel: devtmpfs: initialized May 13 23:41:53.317199 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 23:41:53.317207 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 13 23:41:53.317214 kernel: pinctrl core: initialized pinctrl subsystem May 13 23:41:53.317220 kernel: SMBIOS 3.1.0 present. May 13 23:41:53.317228 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 May 13 23:41:53.317235 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 23:41:53.317242 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 13 23:41:53.317251 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 13 23:41:53.317258 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 13 23:41:53.317265 kernel: audit: initializing netlink subsys (disabled) May 13 23:41:53.317272 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 May 13 23:41:53.317279 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 23:41:53.317286 kernel: cpuidle: using governor menu May 13 23:41:53.317293 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 13 23:41:53.317300 kernel: ASID allocator initialised with 32768 entries May 13 23:41:53.317307 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 23:41:53.317315 kernel: Serial: AMBA PL011 UART driver May 13 23:41:53.317322 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 13 23:41:53.317330 kernel: Modules: 0 pages in range for non-PLT usage May 13 23:41:53.317337 kernel: Modules: 509232 pages in range for PLT usage May 13 23:41:53.317344 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 23:41:53.317351 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 13 23:41:53.317358 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 13 23:41:53.317365 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 13 23:41:53.317372 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 23:41:53.317382 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 13 23:41:53.317389 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 13 23:41:53.317397 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 13 23:41:53.317404 kernel: ACPI: Added _OSI(Module Device) May 13 23:41:53.317411 kernel: ACPI: Added _OSI(Processor Device) May 13 23:41:53.317418 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 23:41:53.317425 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 23:41:53.317432 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 23:41:53.317439 kernel: ACPI: Interpreter enabled May 13 23:41:53.317447 kernel: ACPI: Using GIC for interrupt routing May 13 23:41:53.317454 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA May 13 23:41:53.317462 kernel: printk: console [ttyAMA0] enabled May 13 23:41:53.317469 kernel: printk: bootconsole [pl11] disabled May 13 23:41:53.317476 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA May 13 23:41:53.317483 kernel: iommu: Default domain type: Translated May 13 23:41:53.317490 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 13 23:41:53.317497 kernel: efivars: Registered efivars operations May 13 23:41:53.317504 kernel: vgaarb: loaded May 13 23:41:53.317513 kernel: clocksource: Switched to clocksource arch_sys_counter May 13 23:41:53.317520 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:41:53.317527 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:41:53.317534 kernel: pnp: PnP ACPI init May 13 23:41:53.317541 kernel: pnp: PnP ACPI: found 0 devices May 13 23:41:53.317548 kernel: NET: Registered PF_INET protocol family May 13 23:41:53.317555 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 23:41:53.317562 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 23:41:53.317569 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 23:41:53.317577 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 23:41:53.317585 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 13 23:41:53.317592 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 23:41:53.317599 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:41:53.317606 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:41:53.317613 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:41:53.317620 kernel: PCI: CLS 0 bytes, default 64 May 13 23:41:53.317628 kernel: kvm [1]: HYP mode not available May 13 23:41:53.317635 kernel: Initialise system trusted keyrings May 13 23:41:53.317643 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 23:41:53.317651 kernel: Key type asymmetric registered May 13 23:41:53.317668 kernel: Asymmetric key parser 'x509' registered May 13 23:41:53.317675 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 13 23:41:53.317682 kernel: io scheduler mq-deadline registered May 13 23:41:53.317690 kernel: io scheduler kyber registered May 13 23:41:53.317697 kernel: io scheduler bfq registered May 13 23:41:53.317704 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:41:53.317711 kernel: thunder_xcv, ver 1.0 May 13 23:41:53.317720 kernel: thunder_bgx, ver 1.0 May 13 23:41:53.317727 kernel: nicpf, ver 1.0 May 13 23:41:53.317734 kernel: nicvf, ver 1.0 May 13 23:41:53.317878 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 13 23:41:53.317953 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T23:41:52 UTC (1747179712) May 13 23:41:53.317964 kernel: efifb: probing for efifb May 13 23:41:53.317971 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 13 23:41:53.317978 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 13 23:41:53.317988 kernel: efifb: scrolling: redraw May 13 23:41:53.317995 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 13 23:41:53.318002 kernel: Console: switching to colour frame buffer device 128x48 May 13 23:41:53.318009 kernel: fb0: EFI VGA frame buffer device May 13 23:41:53.318015 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... May 13 23:41:53.318023 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 23:41:53.318030 kernel: No ACPI PMU IRQ for CPU0 May 13 23:41:53.318036 kernel: No ACPI PMU IRQ for CPU1 May 13 23:41:53.318043 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available May 13 23:41:53.318052 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 13 23:41:53.318059 kernel: watchdog: Hard watchdog permanently disabled May 13 23:41:53.318066 kernel: NET: Registered PF_INET6 protocol family May 13 23:41:53.318073 kernel: Segment Routing with IPv6 May 13 23:41:53.318080 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:41:53.318087 kernel: NET: Registered PF_PACKET protocol family May 13 23:41:53.318094 kernel: Key type dns_resolver registered May 13 23:41:53.318101 kernel: registered taskstats version 1 May 13 23:41:53.318108 kernel: Loading compiled-in X.509 certificates May 13 23:41:53.318117 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd' May 13 23:41:53.318124 kernel: Key type .fscrypt registered May 13 23:41:53.318131 kernel: Key type fscrypt-provisioning registered May 13 23:41:53.318138 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 23:41:53.318145 kernel: ima: Allocated hash algorithm: sha1 May 13 23:41:53.318152 kernel: ima: No architecture policies found May 13 23:41:53.318159 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 13 23:41:53.318166 kernel: clk: Disabling unused clocks May 13 23:41:53.318173 kernel: Freeing unused kernel memory: 38464K May 13 23:41:53.318182 kernel: Run /init as init process May 13 23:41:53.318188 kernel: with arguments: May 13 23:41:53.318195 kernel: /init May 13 23:41:53.318202 kernel: with environment: May 13 23:41:53.318209 kernel: HOME=/ May 13 23:41:53.318216 kernel: TERM=linux May 13 23:41:53.318223 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:41:53.318230 systemd[1]: Successfully made /usr/ read-only. May 13 23:41:53.318242 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:41:53.318251 systemd[1]: Detected virtualization microsoft. May 13 23:41:53.318258 systemd[1]: Detected architecture arm64. May 13 23:41:53.318266 systemd[1]: Running in initrd. May 13 23:41:53.318273 systemd[1]: No hostname configured, using default hostname. May 13 23:41:53.318281 systemd[1]: Hostname set to . May 13 23:41:53.318288 systemd[1]: Initializing machine ID from random generator. May 13 23:41:53.318296 systemd[1]: Queued start job for default target initrd.target. May 13 23:41:53.318305 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:41:53.318313 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:41:53.318321 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 23:41:53.318329 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:41:53.318336 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:41:53.318345 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:41:53.318354 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:41:53.318363 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:41:53.318371 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:41:53.318378 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:41:53.318386 systemd[1]: Reached target paths.target - Path Units. May 13 23:41:53.318394 systemd[1]: Reached target slices.target - Slice Units. May 13 23:41:53.318402 systemd[1]: Reached target swap.target - Swaps. May 13 23:41:53.318409 systemd[1]: Reached target timers.target - Timer Units. May 13 23:41:53.318417 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:41:53.318426 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:41:53.318434 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 23:41:53.318442 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
May 13 23:41:53.318450 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:41:53.318457 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:41:53.318465 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:41:53.318473 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:41:53.318480 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:41:53.318488 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:41:53.318497 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:41:53.318505 systemd[1]: Starting systemd-fsck-usr.service... May 13 23:41:53.318513 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:41:53.318520 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:41:53.318544 systemd-journald[218]: Collecting audit messages is disabled. May 13 23:41:53.318565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:41:53.318574 systemd-journald[218]: Journal started May 13 23:41:53.318593 systemd-journald[218]: Runtime Journal (/run/log/journal/71860175124e4a768b0eee729693d5e7) is 8M, max 78.5M, 70.5M free. May 13 23:41:53.326838 systemd-modules-load[220]: Inserted module 'overlay' May 13 23:41:53.347196 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:41:53.347903 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:41:53.373863 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 23:41:53.374726 kernel: Bridge firewalling registered May 13 23:41:53.365782 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:41:53.373945 systemd-modules-load[220]: Inserted module 'br_netfilter' May 13 23:41:53.381418 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:41:53.391836 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:41:53.402338 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:41:53.416795 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:41:53.432792 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:41:53.451775 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:41:53.471465 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:41:53.488528 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:41:53.498484 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:41:53.511024 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:41:53.522819 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:41:53.538926 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 23:41:53.561795 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:41:53.575489 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
May 13 23:41:53.592074 dracut-cmdline[252]: dracut-dracut-053 May 13 23:41:53.600893 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 13 23:41:53.633417 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:41:53.644904 systemd-resolved[253]: Positive Trust Anchors: May 13 23:41:53.644915 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:41:53.644946 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:41:53.647119 systemd-resolved[253]: Defaulting to hostname 'linux'. May 13 23:41:53.648471 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:41:53.655835 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:41:53.760681 kernel: SCSI subsystem initialized May 13 23:41:53.769670 kernel: Loading iSCSI transport class v2.0-870. May 13 23:41:53.778694 kernel: iscsi: registered transport (tcp) May 13 23:41:53.796021 kernel: iscsi: registered transport (qla4xxx) May 13 23:41:53.796077 kernel: QLogic iSCSI HBA Driver May 13 23:41:53.829863 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:41:53.838805 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:41:53.882681 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 23:41:53.892409 kernel: device-mapper: uevent: version 1.0.3 May 13 23:41:53.892456 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:41:53.942684 kernel: raid6: neonx8 gen() 15796 MB/s May 13 23:41:53.960668 kernel: raid6: neonx4 gen() 15826 MB/s May 13 23:41:53.980669 kernel: raid6: neonx2 gen() 13207 MB/s May 13 23:41:54.001669 kernel: raid6: neonx1 gen() 10532 MB/s May 13 23:41:54.021667 kernel: raid6: int64x8 gen() 6798 MB/s May 13 23:41:54.041667 kernel: raid6: int64x4 gen() 7360 MB/s May 13 23:41:54.062668 kernel: raid6: int64x2 gen() 6115 MB/s May 13 23:41:54.086166 kernel: raid6: int64x1 gen() 5058 MB/s May 13 23:41:54.086176 kernel: raid6: using algorithm neonx4 gen() 15826 MB/s
May 13 23:41:54.109863 kernel: raid6: .... xor() 12418 MB/s, rmw enabled May 13 23:41:54.109880 kernel: raid6: using neon recovery algorithm May 13 23:41:54.122362 kernel: xor: measuring software checksum speed May 13 23:41:54.122379 kernel: 8regs : 21607 MB/sec May 13 23:41:54.129377 kernel: 32regs : 19831 MB/sec May 13 23:41:54.129393 kernel: arm64_neon : 27974 MB/sec May 13 23:41:54.133538 kernel: xor: using function: arm64_neon (27974 MB/sec) May 13 23:41:54.183682 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:41:54.194316 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 23:41:54.206895 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:41:54.245182 systemd-udevd[438]: Using default interface naming scheme 'v255'. May 13 23:41:54.250922 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:41:54.261805 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 23:41:54.293231 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation May 13 23:41:54.319031 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:41:54.328813 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:41:54.384552 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:41:54.401969 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 23:41:54.433504 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 23:41:54.442005 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:41:54.457586 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:41:54.477595 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:41:54.500797 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 23:41:54.532767 kernel: hv_vmbus: Vmbus version:5.3 May 13 23:41:54.532790 kernel: hv_vmbus: registering driver hid_hyperv May 13 23:41:54.525747 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:41:54.525898 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:41:54.539881 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:41:54.610438 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 23:41:54.610459 kernel: hv_vmbus: registering driver hyperv_keyboard May 13 23:41:54.610468 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 May 13 23:41:54.610479 kernel: hv_vmbus: registering driver hv_netvsc May 13 23:41:54.610488 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 May 13 23:41:54.610506 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 23:41:54.610515 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 13 23:41:54.595304 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:41:54.632081 kernel: PTP clock support registered May 13 23:41:54.632102 kernel: hv_vmbus: registering driver hv_storvsc May 13 23:41:54.595534 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:41:54.661562 kernel: hv_utils: Registering HyperV Utility Driver May 13 23:41:54.661589 kernel: hv_vmbus: registering driver hv_utils May 13 23:41:54.661599 kernel: hv_utils: Heartbeat IC version 3.0 May 13 23:41:54.661610 kernel: hv_utils: Shutdown IC version 3.2 May 13 23:41:54.661619 kernel: hv_utils: TimeSync IC version 4.0 May 13 23:41:54.628244 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:41:54.905693 kernel: hv_netvsc 000d3af9-ebb7-000d-3af9-ebb7000d3af9 eth0: VF slot 1 added May 13 23:41:54.905914 kernel: scsi host0: storvsc_host_t May 13 23:41:54.906055 kernel: scsi host1: storvsc_host_t May 13 23:41:54.906177 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 13 23:41:54.906199 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 13 23:41:54.645965 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:41:54.866973 systemd-resolved[253]: Clock change detected. Flushing caches. May 13 23:41:54.931891 kernel: hv_vmbus: registering driver hv_pci May 13 23:41:54.893194 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:41:54.893880 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 23:41:54.964746 kernel: hv_pci 505ce26b-676f-43eb-9801-4926e6ae9d1a: PCI VMBus probing: Using version 0x10004 May 13 23:41:54.964926 kernel: hv_pci 505ce26b-676f-43eb-9801-4926e6ae9d1a: PCI host bridge to bus 676f:00 May 13 23:41:54.965014 kernel: pci_bus 676f:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] May 13 23:41:54.930417 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:41:54.986659 kernel: pci_bus 676f:00: No busn resource found for root bus, will use [bus 00-ff] May 13 23:41:54.932616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:41:55.008536 kernel: pci 676f:00:02.0: [15b3:1018] type 00 class 0x020000 May 13 23:41:55.008581 kernel: pci 676f:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] May 13 23:41:54.964958 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:41:55.019909 kernel: pci 676f:00:02.0: enabling Extended Tags May 13 23:41:54.973379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:41:55.034072 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 13 23:41:55.030179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:41:55.086809 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 23:41:55.086859 kernel: pci 676f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 676f:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) May 13 23:41:55.087066 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 13 23:41:55.087184 kernel: pci_bus 676f:00: busn_res: [bus 00-ff] end is updated to 00 May 13 23:41:55.087296 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 13 23:41:55.087395 kernel: sd 0:0:0:0: [sda] Write Protect is off May 13 23:41:55.087477 kernel: pci 676f:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] May 13 23:41:55.087562 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 13 23:41:55.070343 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 13 23:41:55.109024 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 13 23:41:55.109187 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 13 23:41:55.115440 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 23:41:55.119384 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 13 23:41:55.143971 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:41:55.181154 kernel: mlx5_core 676f:00:02.0: enabling device (0000 -> 0002) May 13 23:41:55.188230 kernel: mlx5_core 676f:00:02.0: firmware version: 16.30.1284 May 13 23:41:55.382240 kernel: hv_netvsc 000d3af9-ebb7-000d-3af9-ebb7000d3af9 eth0: VF registering: eth1 May 13 23:41:55.382495 kernel: mlx5_core 676f:00:02.0 eth1: joined to eth0 May 13 23:41:55.392698 kernel: mlx5_core 676f:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) May 13 23:41:55.403236 kernel: mlx5_core 676f:00:02.0 enP26479s1: renamed from eth1 May 13 23:41:55.612000 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. May 13 23:41:55.634265 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (496) May 13 23:41:55.655248 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 13 23:41:55.673987 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (493) May 13 23:41:55.686433 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. May 13 23:41:55.693263 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. May 13 23:41:55.709381 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 23:41:55.763986 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. May 13 23:41:56.751966 disk-uuid[603]: The operation has completed successfully. May 13 23:41:56.757936 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 23:41:56.821315 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 23:41:56.823236 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 23:41:56.856696 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 23:41:56.879664 sh[692]: Success May 13 23:41:56.909364 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 13 23:41:57.110892 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 23:41:57.122051 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 23:41:57.138578 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 23:41:57.170427 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d May 13 23:41:57.170493 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 13 23:41:57.177224 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 23:41:57.182244 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 23:41:57.186408 kernel: BTRFS info (device dm-0): using free space tree May 13 23:41:57.598000 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
May 13 23:41:57.603512 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 23:41:57.606349 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 23:41:57.617332 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 23:41:57.669333 kernel: BTRFS info (device sda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:41:57.669392 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:41:57.673798 kernel: BTRFS info (device sda6): using free space tree May 13 23:41:57.693507 kernel: BTRFS info (device sda6): auto enabling async discard May 13 23:41:57.703270 kernel: BTRFS info (device sda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:41:57.707923 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 23:41:57.720341 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 23:41:57.747244 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:41:57.760982 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:41:57.799723 systemd-networkd[873]: lo: Link UP May 13 23:41:57.799733 systemd-networkd[873]: lo: Gained carrier May 13 23:41:57.801333 systemd-networkd[873]: Enumeration completed May 13 23:41:57.801417 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:41:57.809559 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:41:57.809562 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:41:57.809704 systemd[1]: Reached target network.target - Network. May 13 23:41:57.888227 kernel: mlx5_core 676f:00:02.0 enP26479s1: Link up May 13 23:41:57.928227 kernel: hv_netvsc 000d3af9-ebb7-000d-3af9-ebb7000d3af9 eth0: Data path switched to VF: enP26479s1 May 13 23:41:57.929095 systemd-networkd[873]: enP26479s1: Link UP May 13 23:41:57.929358 systemd-networkd[873]: eth0: Link UP May 13 23:41:57.929743 systemd-networkd[873]: eth0: Gained carrier May 13 23:41:57.929752 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:41:57.953904 systemd-networkd[873]: enP26479s1: Gained carrier May 13 23:41:57.965253 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 13 23:41:58.507417 ignition[830]: Ignition 2.20.0 May 13 23:41:58.508251 ignition[830]: Stage: fetch-offline May 13 23:41:58.510710 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:41:58.508293 ignition[830]: no configs at "/usr/lib/ignition/base.d" May 13 23:41:58.524355 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 13 23:41:58.508307 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:41:58.508498 ignition[830]: parsed url from cmdline: "" May 13 23:41:58.508504 ignition[830]: no config URL provided May 13 23:41:58.508513 ignition[830]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:41:58.508523 ignition[830]: no config at "/usr/lib/ignition/user.ign" May 13 23:41:58.508532 ignition[830]: failed to fetch config: resource requires networking May 13 23:41:58.508745 ignition[830]: Ignition finished successfully May 13 23:41:58.564114 ignition[882]: Ignition 2.20.0 May 13 23:41:58.564121 ignition[882]: Stage: fetch May 13 23:41:58.564285 ignition[882]: no configs at "/usr/lib/ignition/base.d" May 13 23:41:58.564294 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:41:58.564386 ignition[882]: parsed url from cmdline: "" May 13 23:41:58.564390 ignition[882]: no config URL provided May 13 23:41:58.564394 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:41:58.564401 ignition[882]: no config at "/usr/lib/ignition/user.ign" May 13 23:41:58.564424 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 13 23:41:58.654367 ignition[882]: GET result: OK May 13 23:41:58.654484 ignition[882]: config has been read from IMDS userdata May 13 23:41:58.659304 unknown[882]: fetched base config from "system" May 13 23:41:58.654524 ignition[882]: parsing config with SHA512: 337a03d0788450270402b7326bd96e98e113e526e6e7fa90241b0a025c6fdb8e41d41bc735cb4d7015a2e71770296fcf937e21e619aa32ec67e589cc8e43e75a May 13 23:41:58.659312 unknown[882]: fetched base config from "system" May 13 23:41:58.659689 ignition[882]: fetch: fetch complete May 13 23:41:58.659317 unknown[882]: fetched user config from "azure" May 13 23:41:58.659701 ignition[882]: fetch: fetch passed May 13 23:41:58.665925 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 13 23:41:58.659743 ignition[882]: Ignition finished successfully May 13 23:41:58.676355 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 23:41:58.720723 ignition[889]: Ignition 2.20.0 May 13 23:41:58.720819 ignition[889]: Stage: kargs May 13 23:41:58.721001 ignition[889]: no configs at "/usr/lib/ignition/base.d" May 13 23:41:58.728292 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 23:41:58.721011 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:41:58.741390 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 23:41:58.723774 ignition[889]: kargs: kargs passed May 13 23:41:58.723985 ignition[889]: Ignition finished successfully May 13 23:41:58.777623 ignition[895]: Ignition 2.20.0 May 13 23:41:58.777633 ignition[895]: Stage: disks May 13 23:41:58.783430 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 23:41:58.777790 ignition[895]: no configs at "/usr/lib/ignition/base.d" May 13 23:41:58.791756 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 23:41:58.777799 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:41:58.798076 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 23:41:58.781860 ignition[895]: disks: disks passed May 13 23:41:58.810651 systemd[1]: Reached target local-fs.target - Local File Systems. 
May 13 23:41:58.781913 ignition[895]: Ignition finished successfully May 13 23:41:58.820468 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:41:58.831446 systemd[1]: Reached target basic.target - Basic System. May 13 23:41:58.845385 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 23:41:58.922541 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks May 13 23:41:58.931358 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 23:41:58.941316 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 23:41:59.008228 kernel: EXT4-fs (sda9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none. May 13 23:41:59.008891 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 23:41:59.013616 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 23:41:59.061330 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:41:59.078988 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 23:41:59.099341 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 13 23:41:59.106496 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (915) May 13 23:41:59.125731 kernel: BTRFS info (device sda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:41:59.125786 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:41:59.125805 kernel: BTRFS info (device sda6): using free space tree May 13 23:41:59.119400 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 23:41:59.149199 kernel: BTRFS info (device sda6): auto enabling async discard May 13 23:41:59.119455 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:41:59.157563 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:41:59.167481 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 23:41:59.180392 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 23:41:59.224329 systemd-networkd[873]: enP26479s1: Gained IPv6LL May 13 23:41:59.416370 systemd-networkd[873]: eth0: Gained IPv6LL May 13 23:41:59.730532 coreos-metadata[917]: May 13 23:41:59.730 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 13 23:41:59.739114 coreos-metadata[917]: May 13 23:41:59.738 INFO Fetch successful May 13 23:41:59.739114 coreos-metadata[917]: May 13 23:41:59.739 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 13 23:41:59.755507 coreos-metadata[917]: May 13 23:41:59.755 INFO Fetch successful May 13 23:41:59.769264 coreos-metadata[917]: May 13 23:41:59.769 INFO wrote hostname ci-4284.0.0-n-13ce75130c to /sysroot/etc/hostname May 13 23:41:59.778773 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
May 13 23:41:59.941713 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:41:59.979055 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory May 13 23:41:59.987941 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:41:59.995994 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:42:00.849037 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 23:42:00.858336 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:42:00.878948 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:42:00.893238 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 23:42:00.907515 kernel: BTRFS info (device sda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:42:00.913657 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 23:42:00.939076 ignition[1039]: INFO : Ignition 2.20.0 May 13 23:42:00.939076 ignition[1039]: INFO : Stage: mount May 13 23:42:00.939076 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:42:00.939076 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:42:00.970862 ignition[1039]: INFO : mount: mount passed May 13 23:42:00.970862 ignition[1039]: INFO : Ignition finished successfully May 13 23:42:00.944791 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:42:00.955300 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:42:00.990776 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:42:01.018232 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1049) May 13 23:42:01.030776 kernel: BTRFS info (device sda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:42:01.030824 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:42:01.034939 kernel: BTRFS info (device sda6): using free space tree May 13 23:42:01.036228 kernel: BTRFS info (device sda6): auto enabling async discard May 13 23:42:01.042837 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 23:42:01.073319 ignition[1067]: INFO : Ignition 2.20.0 May 13 23:42:01.073319 ignition[1067]: INFO : Stage: files May 13 23:42:01.080911 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:42:01.080911 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:42:01.080911 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping May 13 23:42:01.098537 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 23:42:01.098537 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 23:42:01.167769 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 23:42:01.174992 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 23:42:01.174992 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 23:42:01.168154 unknown[1067]: wrote ssh authorized keys file for user: core May 13 23:42:01.235566 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 13 23:42:01.245884 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 13 23:42:01.275816 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 23:42:01.408856 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 13 23:42:01.419707 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 23:42:01.419707 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 13 23:42:01.419707 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 23:42:01.419707 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 23:42:01.419707 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:42:01.419707 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:42:01.419707 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:42:01.419707 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:42:01.419707 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:42:01.419707 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:42:01.419707 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 23:42:01.419707 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 23:42:01.419707 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 23:42:01.419707 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 13 23:42:01.871503 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 13 23:42:02.119801 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 23:42:02.119801 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 13 23:42:02.138897 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:42:02.138897 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:42:02.138897 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 13 23:42:02.138897 ignition[1067]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 13 23:42:02.138897 ignition[1067]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 13 23:42:02.138897 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 23:42:02.138897 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 23:42:02.138897 ignition[1067]: INFO : files: files passed May 13 23:42:02.138897 ignition[1067]: INFO : Ignition finished successfully May 13 23:42:02.141554 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 23:42:02.157365 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 23:42:02.206605 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 23:42:02.227743 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 23:42:02.227858 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 23:42:02.267559 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:42:02.267559 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 23:42:02.284944 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:42:02.277651 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:42:02.292309 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 23:42:02.309366 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 23:42:02.365888 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 23:42:02.365989 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:42:02.379063 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 23:42:02.390919 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 23:42:02.401673 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 23:42:02.404370 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 23:42:02.445735 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:42:02.456361 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 23:42:02.484915 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 23:42:02.491621 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:42:02.503628 systemd[1]: Stopped target timers.target - Timer Units. May 13 23:42:02.514431 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 23:42:02.514561 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:42:02.530021 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 23:42:02.535749 systemd[1]: Stopped target basic.target - Basic System. May 13 23:42:02.546485 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 23:42:02.557183 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:42:02.567671 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 23:42:02.579098 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 23:42:02.590197 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:42:02.602385 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 23:42:02.612728 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 23:42:02.624920 systemd[1]: Stopped target swap.target - Swaps. May 13 23:42:02.634730 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 23:42:02.634935 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 23:42:02.649487 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 23:42:02.660065 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:42:02.671673 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 23:42:02.676658 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:42:02.685061 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 23:42:02.685268 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 23:42:02.701743 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 23:42:02.701925 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:42:02.716132 systemd[1]: ignition-files.service: Deactivated successfully. May 13 23:42:02.716331 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 23:42:02.727032 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 13 23:42:02.727193 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 13 23:42:02.741447 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
May 13 23:42:02.751860 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 23:42:02.802696 ignition[1120]: INFO : Ignition 2.20.0 May 13 23:42:02.802696 ignition[1120]: INFO : Stage: umount May 13 23:42:02.802696 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:42:02.802696 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:42:02.802696 ignition[1120]: INFO : umount: umount passed May 13 23:42:02.802696 ignition[1120]: INFO : Ignition finished successfully May 13 23:42:02.752113 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:42:02.786400 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 23:42:02.797921 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 23:42:02.798130 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:42:02.810367 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 23:42:02.810525 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:42:02.819373 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 23:42:02.819458 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 23:42:02.833547 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 23:42:02.835044 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 23:42:02.835278 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 23:42:02.853445 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 23:42:02.853510 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 23:42:02.862794 systemd[1]: ignition-fetch.service: Deactivated successfully. May 13 23:42:02.862883 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 13 23:42:02.875099 systemd[1]: Stopped target network.target - Network. May 13 23:42:02.886438 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 23:42:02.886523 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:42:02.897469 systemd[1]: Stopped target paths.target - Path Units. May 13 23:42:02.903617 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 23:42:02.911247 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:42:02.923388 systemd[1]: Stopped target slices.target - Slice Units. May 13 23:42:02.930796 systemd[1]: Stopped target sockets.target - Socket Units. May 13 23:42:02.941759 systemd[1]: iscsid.socket: Deactivated successfully. May 13 23:42:02.941813 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:42:02.952623 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 23:42:02.952669 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:42:02.962755 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 23:42:02.962816 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 23:42:02.973192 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 23:42:02.973253 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 23:42:02.984015 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 23:42:02.993900 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
May 13 23:42:03.009563 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 23:42:03.009670 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 23:42:03.243054 kernel: hv_netvsc 000d3af9-ebb7-000d-3af9-ebb7000d3af9 eth0: Data path switched from VF: enP26479s1 May 13 23:42:03.022908 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 23:42:03.023007 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 23:42:03.041148 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 23:42:03.041480 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 23:42:03.041593 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 23:42:03.051406 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 23:42:03.054509 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 23:42:03.054576 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 23:42:03.068342 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 23:42:03.079136 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 23:42:03.079277 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:42:03.090400 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:42:03.090454 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:42:03.109413 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 23:42:03.109476 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 23:42:03.115295 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 23:42:03.115338 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:42:03.133718 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:42:03.144302 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 23:42:03.144377 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 23:42:03.175023 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 23:42:03.175182 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:42:03.187045 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 23:42:03.187091 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 23:42:03.197973 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 23:42:03.198003 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:42:03.210949 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 23:42:03.211013 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 23:42:03.236832 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 23:42:03.236898 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 23:42:03.254029 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:42:03.254096 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:42:03.282386 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
May 13 23:42:03.293141 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 23:42:03.293312 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:42:03.315353 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:42:03.315413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:42:03.331264 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 23:42:03.331342 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:42:03.331691 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 23:42:03.331802 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 23:42:03.341502 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 23:42:03.341585 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 23:42:03.576370 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 23:42:03.576475 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 23:42:03.586989 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 23:42:03.597020 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 23:42:03.597086 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 23:42:03.611370 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 23:42:03.670598 systemd[1]: Switching root. May 13 23:42:03.742784 systemd-journald[218]: Journal stopped May 13 23:42:07.932965 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). May 13 23:42:07.933008 kernel: SELinux: policy capability network_peer_controls=1 May 13 23:42:07.933020 kernel: SELinux: policy capability open_perms=1 May 13 23:42:07.933032 kernel: SELinux: policy capability extended_socket_class=1 May 13 23:42:07.933040 kernel: SELinux: policy capability always_check_network=0 May 13 23:42:07.933048 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 23:42:07.933057 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 23:42:07.933065 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 23:42:07.933073 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 23:42:07.933081 kernel: audit: type=1403 audit(1747179724.320:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 23:42:07.933092 systemd[1]: Successfully loaded SELinux policy in 133.077ms. May 13 23:42:07.933102 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.380ms. May 13 23:42:07.933115 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:42:07.933124 systemd[1]: Detected virtualization microsoft. May 13 23:42:07.933133 systemd[1]: Detected architecture arm64. May 13 23:42:07.933144 systemd[1]: Detected first boot. May 13 23:42:07.933153 systemd[1]: Hostname set to . May 13 23:42:07.933162 systemd[1]: Initializing machine ID from random generator. May 13 23:42:07.933171 zram_generator::config[1165]: No configuration found. 
May 13 23:42:07.933181 kernel: NET: Registered PF_VSOCK protocol family May 13 23:42:07.933189 systemd[1]: Populated /etc with preset unit settings. May 13 23:42:07.933201 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 23:42:07.933222 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 23:42:07.933234 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 23:42:07.933243 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 23:42:07.933252 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 23:42:07.933262 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 23:42:07.933271 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 23:42:07.933280 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 23:42:07.933292 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 23:42:07.933301 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 23:42:07.933310 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 23:42:07.933319 systemd[1]: Created slice user.slice - User and Session Slice. May 13 23:42:07.933330 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:42:07.933339 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:42:07.933348 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 23:42:07.933358 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 23:42:07.933369 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 23:42:07.933378 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:42:07.933387 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 13 23:42:07.933399 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:42:07.933409 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 23:42:07.933419 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 23:42:07.933429 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 23:42:07.933440 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 23:42:07.933450 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:42:07.933459 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:42:07.933468 systemd[1]: Reached target slices.target - Slice Units. May 13 23:42:07.933477 systemd[1]: Reached target swap.target - Swaps. May 13 23:42:07.933487 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 23:42:07.933496 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 23:42:07.933506 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 23:42:07.933517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:42:07.933527 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
May 13 23:42:07.933537 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:42:07.933547 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 23:42:07.933557 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 23:42:07.933569 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 23:42:07.933578 systemd[1]: Mounting media.mount - External Media Directory... May 13 23:42:07.933588 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 23:42:07.933597 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 23:42:07.933606 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 23:42:07.933617 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 23:42:07.933626 systemd[1]: Reached target machines.target - Containers. May 13 23:42:07.933636 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 23:42:07.933646 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:42:07.933658 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:42:07.933667 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 23:42:07.933677 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:42:07.933686 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:42:07.933696 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:42:07.933706 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 23:42:07.933715 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:42:07.933726 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 23:42:07.933737 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 23:42:07.933747 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 23:42:07.933757 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 23:42:07.933766 kernel: fuse: init (API version 7.39) May 13 23:42:07.933775 systemd[1]: Stopped systemd-fsck-usr.service. May 13 23:42:07.933784 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:42:07.933794 kernel: loop: module loaded May 13 23:42:07.933803 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:42:07.933814 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:42:07.933824 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 23:42:07.933833 kernel: ACPI: bus type drm_connector registered May 13 23:42:07.933842 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 23:42:07.933851 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
May 13 23:42:07.933889 systemd-journald[1270]: Collecting audit messages is disabled. May 13 23:42:07.933914 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:42:07.933925 systemd-journald[1270]: Journal started May 13 23:42:07.933945 systemd-journald[1270]: Runtime Journal (/run/log/journal/af3a14077f8446da8da7c9ffb033a0f0) is 8M, max 78.5M, 70.5M free. May 13 23:42:06.989344 systemd[1]: Queued start job for default target multi-user.target. May 13 23:42:06.996969 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 13 23:42:06.997343 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 23:42:06.997666 systemd[1]: systemd-journald.service: Consumed 3.146s CPU time. May 13 23:42:07.953979 systemd[1]: verity-setup.service: Deactivated successfully. May 13 23:42:07.954035 systemd[1]: Stopped verity-setup.service. May 13 23:42:07.971380 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:42:07.972317 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 23:42:07.978162 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 23:42:07.984399 systemd[1]: Mounted media.mount - External Media Directory. May 13 23:42:07.989645 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 23:42:07.995862 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 23:42:08.002181 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 23:42:08.008237 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 23:42:08.016583 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:42:08.023812 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 23:42:08.023970 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 23:42:08.030559 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:42:08.032256 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:42:08.038848 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:42:08.039019 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:42:08.046713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:42:08.046884 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:42:08.053954 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 23:42:08.054112 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 23:42:08.060452 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:42:08.060602 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:42:08.068377 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:42:08.074830 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 23:42:08.082020 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 23:42:08.089159 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 23:42:08.097702 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:42:08.113105 systemd[1]: Reached target network-pre.target - Preparation for Network. 
May 13 23:42:08.120797 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 23:42:08.132314 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 23:42:08.138860 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 23:42:08.138964 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:42:08.145922 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 23:42:08.153510 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 23:42:08.169115 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 23:42:08.175349 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:42:08.176831 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 23:42:08.186350 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 23:42:08.194327 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:42:08.202343 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 23:42:08.208458 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:42:08.210434 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:42:08.225731 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 23:42:08.239257 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 23:42:08.249392 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 23:42:08.258104 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 23:42:08.260153 systemd-journald[1270]: Time spent on flushing to /var/log/journal/af3a14077f8446da8da7c9ffb033a0f0 is 14.841ms for 911 entries. May 13 23:42:08.260153 systemd-journald[1270]: System Journal (/var/log/journal/af3a14077f8446da8da7c9ffb033a0f0) is 8M, max 2.6G, 2.6G free. May 13 23:42:08.304413 systemd-journald[1270]: Received client request to flush runtime journal. May 13 23:42:08.273974 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 23:42:08.286728 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 23:42:08.296533 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 23:42:08.306088 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 23:42:08.318193 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 23:42:08.327046 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 23:42:08.341761 udevadm[1309]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 23:42:08.362238 kernel: loop0: detected capacity change from 0 to 126448 May 13 23:42:08.378321 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 13 23:42:08.408147 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 23:42:08.409721 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 23:42:08.577026 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 23:42:08.584706 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:42:08.678096 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. May 13 23:42:08.678462 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. May 13 23:42:08.683531 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:42:08.716539 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 23:42:08.741237 kernel: loop1: detected capacity change from 0 to 189592 May 13 23:42:08.781291 kernel: loop2: detected capacity change from 0 to 28888 May 13 23:42:09.150235 kernel: loop3: detected capacity change from 0 to 103832 May 13 23:42:09.210871 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 23:42:09.220583 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:42:09.251805 systemd-udevd[1330]: Using default interface naming scheme 'v255'. May 13 23:42:09.399713 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:42:09.410058 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:42:09.464059 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 13 23:42:09.475432 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 23:42:09.548078 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 23:42:09.568553 kernel: mousedev: PS/2 mouse device common for all mice May 13 23:42:09.600235 kernel: hv_vmbus: registering driver hv_balloon May 13 23:42:09.600315 kernel: loop4: detected capacity change from 0 to 126448 May 13 23:42:09.623160 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 13 23:42:09.623295 kernel: hv_balloon: Memory hot add disabled on ARM64 May 13 23:42:09.623321 kernel: hv_vmbus: registering driver hyperv_fb May 13 23:42:09.645621 kernel: loop5: detected capacity change from 0 to 189592 May 13 23:42:09.645715 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 13 23:42:09.645734 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 13 23:42:09.650595 kernel: Console: switching to colour dummy device 80x25 May 13 23:42:09.664422 kernel: Console: switching to colour frame buffer device 128x48 May 13 23:42:09.673002 kernel: loop6: detected capacity change from 0 to 28888 May 13 23:42:09.673552 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:42:09.685459 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:42:09.687319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:42:09.702391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:42:09.710280 kernel: loop7: detected capacity change from 0 to 103832 May 13 23:42:09.717131 (sd-merge)[1373]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. May 13 23:42:09.721720 (sd-merge)[1373]: Merged extensions into '/usr'. 
May 13 23:42:09.729441 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:42:09.731256 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:42:09.750290 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1346) May 13 23:42:09.752592 systemd[1]: Reload requested from client PID 1306 ('systemd-sysext') (unit systemd-sysext.service)... May 13 23:42:09.752606 systemd[1]: Reloading... May 13 23:42:09.763924 systemd-networkd[1332]: lo: Link UP May 13 23:42:09.763933 systemd-networkd[1332]: lo: Gained carrier May 13 23:42:09.767823 systemd-networkd[1332]: Enumeration completed May 13 23:42:09.769879 systemd-networkd[1332]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:42:09.770146 systemd-networkd[1332]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:42:09.837232 kernel: mlx5_core 676f:00:02.0 enP26479s1: Link up May 13 23:42:09.877437 kernel: hv_netvsc 000d3af9-ebb7-000d-3af9-ebb7000d3af9 eth0: Data path switched to VF: enP26479s1 May 13 23:42:09.878269 systemd-networkd[1332]: enP26479s1: Link UP May 13 23:42:09.878396 systemd-networkd[1332]: eth0: Link UP May 13 23:42:09.878399 systemd-networkd[1332]: eth0: Gained carrier May 13 23:42:09.878413 systemd-networkd[1332]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:42:09.890849 systemd-networkd[1332]: enP26479s1: Gained carrier May 13 23:42:09.903421 systemd-networkd[1332]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 13 23:42:09.905242 zram_generator::config[1496]: No configuration found. May 13 23:42:09.991193 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:42:10.087457 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 13 23:42:10.094360 systemd[1]: Reloading finished in 341 ms. May 13 23:42:10.109101 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:42:10.117257 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 23:42:10.125786 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:42:10.166441 systemd[1]: Starting ensure-sysext.service... May 13 23:42:10.173053 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:42:10.181849 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 23:42:10.189749 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:42:10.201020 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:42:10.209364 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:42:10.218441 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:42:10.252451 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
May 13 23:42:10.257081 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 23:42:10.257295 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 23:42:10.257921 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 23:42:10.258118 systemd-tmpfiles[1532]: ACLs are not supported, ignoring. May 13 23:42:10.258163 systemd-tmpfiles[1532]: ACLs are not supported, ignoring. May 13 23:42:10.264256 lvm[1528]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:42:10.263818 systemd[1]: Reload requested from client PID 1527 ('systemctl') (unit ensure-sysext.service)... May 13 23:42:10.263827 systemd[1]: Reloading... May 13 23:42:10.280706 systemd-tmpfiles[1532]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:42:10.280846 systemd-tmpfiles[1532]: Skipping /boot May 13 23:42:10.290617 systemd-tmpfiles[1532]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:42:10.292895 systemd-tmpfiles[1532]: Skipping /boot May 13 23:42:10.350303 zram_generator::config[1571]: No configuration found. May 13 23:42:10.454185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:42:10.553443 systemd[1]: Reloading finished in 289 ms. May 13 23:42:10.570988 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:42:10.578677 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:42:10.586895 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:42:10.594033 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:42:10.606764 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:42:10.614094 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:42:10.620439 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 23:42:10.630939 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:42:10.641446 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 23:42:10.641589 lvm[1635]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:42:10.660073 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:42:10.670342 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 23:42:10.684588 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:42:10.697566 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:42:10.706508 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:42:10.714260 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:42:10.724522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 13 23:42:10.730121 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:42:10.730353 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:42:10.733588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:42:10.735533 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:42:10.742994 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:42:10.743880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:42:10.752230 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:42:10.756982 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:42:10.770960 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:42:10.781556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:42:10.785418 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:42:10.801527 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:42:10.818454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:42:10.825003 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:42:10.825133 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:42:10.827579 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 23:42:10.837503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:42:10.837659 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:42:10.844487 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:42:10.844684 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:42:10.851854 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:42:10.852002 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:42:10.863125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:42:10.866565 augenrules[1675]: No rules May 13 23:42:10.868500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:42:10.876133 systemd-resolved[1638]: Positive Trust Anchors: May 13 23:42:10.876257 systemd-resolved[1638]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:42:10.876290 systemd-resolved[1638]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:42:10.877434 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:42:10.888523 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:42:10.896930 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:42:10.904305 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:42:10.904434 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:42:10.904573 systemd[1]: Reached target time-set.target - System Time Set. May 13 23:42:10.905958 systemd-resolved[1638]: Using system hostname 'ci-4284.0.0-n-13ce75130c'. May 13 23:42:10.911451 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:42:10.917811 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:42:10.918032 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:42:10.924228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:42:10.924415 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:42:10.930925 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:42:10.931086 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:42:10.937318 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:42:10.938045 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:42:10.945803 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:42:10.945955 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:42:10.957608 systemd[1]: Finished ensure-sysext.service. May 13 23:42:10.962896 systemd[1]: Reached target network.target - Network. May 13 23:42:10.967595 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:42:10.974430 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:42:10.974510 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:42:11.189446 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:42:11.192291 systemd-networkd[1332]: enP26479s1: Gained IPv6LL May 13 23:42:11.197008 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 13 23:42:11.768316 systemd-networkd[1332]: eth0: Gained IPv6LL May 13 23:42:11.773714 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:42:11.781787 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:42:13.893036 ldconfig[1301]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:42:13.907305 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 23:42:13.914967 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 23:42:13.937606 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:42:13.944014 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:42:13.949927 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:42:13.956516 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:42:13.963278 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:42:13.968908 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:42:13.975468 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:42:13.982867 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:42:13.982900 systemd[1]: Reached target paths.target - Path Units. May 13 23:42:13.988316 systemd[1]: Reached target timers.target - Timer Units. May 13 23:42:14.012925 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:42:14.020283 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:42:14.027191 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:42:14.034028 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:42:14.040625 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:42:14.048492 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:42:14.054180 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:42:14.061021 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:42:14.066775 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:42:14.071729 systemd[1]: Reached target basic.target - Basic System. May 13 23:42:14.076789 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:42:14.076816 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:42:14.079150 systemd[1]: Starting chronyd.service - NTP client/server... May 13 23:42:14.096324 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:42:14.105355 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 13 23:42:14.111982 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:42:14.124736 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:42:14.133579 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 13 23:42:14.143256 (chronyd)[1696]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 13 23:42:14.143325 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:42:14.143360 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). May 13 23:42:14.146741 jq[1700]: false May 13 23:42:14.147334 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. May 13 23:42:14.154025 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). May 13 23:42:14.156401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:42:14.163374 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:42:14.170925 KVP[1705]: KVP starting; pid is:1705 May 13 23:42:14.170927 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:42:14.179272 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:42:14.183412 chronyd[1713]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 13 23:42:14.187717 KVP[1705]: KVP LIC Version: 3.1 May 13 23:42:14.188229 kernel: hv_utils: KVP IC version 4.0 May 13 23:42:14.195534 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:42:14.205908 extend-filesystems[1704]: Found loop4 May 13 23:42:14.205908 extend-filesystems[1704]: Found loop5 May 13 23:42:14.205908 extend-filesystems[1704]: Found loop6 May 13 23:42:14.205908 extend-filesystems[1704]: Found loop7 May 13 23:42:14.205908 extend-filesystems[1704]: Found sda May 13 23:42:14.205908 extend-filesystems[1704]: Found sda1 May 13 23:42:14.205908 extend-filesystems[1704]: Found sda2 May 13 23:42:14.205908 extend-filesystems[1704]: Found sda3 May 13 23:42:14.205908 extend-filesystems[1704]: Found usr May 13 23:42:14.205908 extend-filesystems[1704]: Found sda4 May 13 23:42:14.205908 extend-filesystems[1704]: Found sda6 May 13 23:42:14.205908 extend-filesystems[1704]: Found sda7 May 13 23:42:14.378855 extend-filesystems[1704]: Found sda9 May 13 23:42:14.378855 extend-filesystems[1704]: Checking size of /dev/sda9 May 13 23:42:14.378855 extend-filesystems[1704]: Old size kept for /dev/sda9 May 13 23:42:14.378855 extend-filesystems[1704]: Found sr0 May 13 23:42:14.475902 coreos-metadata[1698]: May 13 23:42:14.372 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 13 23:42:14.475902 coreos-metadata[1698]: May 13 23:42:14.384 INFO Fetch successful May 13 23:42:14.475902 coreos-metadata[1698]: May 13 23:42:14.384 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 13 23:42:14.475902 coreos-metadata[1698]: May 13 23:42:14.393 INFO Fetch successful May 13 23:42:14.475902 coreos-metadata[1698]: May 13 23:42:14.393 INFO Fetching http://168.63.129.16/machine/4ad5c74d-6e82-498a-953d-91fdbc75f11c/8c810f71%2Dde72%2D41b0%2D8b56%2D4d3e11a758de.%5Fci%2D4284.0.0%2Dn%2D13ce75130c?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 13 23:42:14.475902 coreos-metadata[1698]: May 13 23:42:14.435 INFO Fetch successful May 13 23:42:14.475902 coreos-metadata[1698]: May 13 23:42:14.435 INFO Fetching 
http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 13 23:42:14.475902 coreos-metadata[1698]: May 13 23:42:14.461 INFO Fetch successful May 13 23:42:14.210974 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:42:14.211043 chronyd[1713]: Timezone right/UTC failed leap second check, ignoring May 13 23:42:14.236548 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:42:14.211235 chronyd[1713]: Loaded seccomp filter (level 2) May 13 23:42:14.248417 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:42:14.247106 dbus-daemon[1699]: [system] SELinux support is enabled May 13 23:42:14.523715 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1752) May 13 23:42:14.523750 update_engine[1728]: I20250513 23:42:14.331148 1728 main.cc:92] Flatcar Update Engine starting May 13 23:42:14.523750 update_engine[1728]: I20250513 23:42:14.334158 1728 update_check_scheduler.cc:74] Next update check in 11m59s May 13 23:42:14.248920 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:42:14.523050 dbus-daemon[1699]: [system] Successfully activated service 'org.freedesktop.systemd1' May 13 23:42:14.524181 jq[1732]: true May 13 23:42:14.252476 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:42:14.264437 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:42:14.285163 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:42:14.524707 tar[1758]: linux-arm64/helm May 13 23:42:14.299098 systemd[1]: Started chronyd.service - NTP client/server. May 13 23:42:14.318085 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:42:14.320305 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:42:14.320609 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:42:14.320771 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:42:14.351625 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:42:14.353309 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:42:14.369713 systemd-logind[1726]: New seat seat0. May 13 23:42:14.371091 systemd-logind[1726]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 23:42:14.372543 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:42:14.394155 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:42:14.428514 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:42:14.428709 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:42:14.522010 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:42:14.522036 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 13 23:42:14.525546 (ntainerd)[1765]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:42:14.534353 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:42:14.534381 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:42:14.547665 jq[1764]: true May 13 23:42:14.553276 systemd[1]: Started update-engine.service - Update Engine. May 13 23:42:14.580188 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:42:14.594525 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 23:42:14.608590 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:42:14.771943 bash[1833]: Updated "/home/core/.ssh/authorized_keys" May 13 23:42:14.774148 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:42:14.785406 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 23:42:14.988766 locksmithd[1801]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:42:15.027547 sshd_keygen[1731]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:42:15.047540 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:42:15.057636 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:42:15.067081 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 13 23:42:15.104179 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:42:15.106390 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:42:15.119551 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:42:15.134701 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 13 23:42:15.147082 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:42:15.163793 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:42:15.174465 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 23:42:15.185280 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:42:15.295726 tar[1758]: linux-arm64/LICENSE May 13 23:42:15.295726 tar[1758]: linux-arm64/README.md May 13 23:42:15.314332 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 13 23:42:15.327962 containerd[1765]: time="2025-05-13T23:42:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:42:15.331550 containerd[1765]: time="2025-05-13T23:42:15.331490320Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:42:15.339973 containerd[1765]: time="2025-05-13T23:42:15.339831440Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.92µs" May 13 23:42:15.339973 containerd[1765]: time="2025-05-13T23:42:15.339871720Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:42:15.339973 containerd[1765]: time="2025-05-13T23:42:15.339891880Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:42:15.340107 containerd[1765]: time="2025-05-13T23:42:15.340049720Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:42:15.340107 containerd[1765]: time="2025-05-13T23:42:15.340070000Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:42:15.340107 containerd[1765]: time="2025-05-13T23:42:15.340095280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:42:15.340166 containerd[1765]: time="2025-05-13T23:42:15.340150120Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:42:15.340184 containerd[1765]: time="2025-05-13T23:42:15.340178280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:42:15.340904 containerd[1765]: time="2025-05-13T23:42:15.340436440Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:42:15.340904 containerd[1765]: time="2025-05-13T23:42:15.340458400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:42:15.340904 containerd[1765]: time="2025-05-13T23:42:15.340470080Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:42:15.340904 containerd[1765]: time="2025-05-13T23:42:15.340478120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:42:15.340904 containerd[1765]: time="2025-05-13T23:42:15.340560720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:42:15.340904 containerd[1765]: time="2025-05-13T23:42:15.340742440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:42:15.340904 containerd[1765]: time="2025-05-13T23:42:15.340770040Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 May 13 23:42:15.340904 containerd[1765]: time="2025-05-13T23:42:15.340779760Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:42:15.340904 containerd[1765]: time="2025-05-13T23:42:15.340817280Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:42:15.341105 containerd[1765]: time="2025-05-13T23:42:15.341061920Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:42:15.341662 containerd[1765]: time="2025-05-13T23:42:15.341120840Z" level=info msg="metadata content store policy set" policy=shared May 13 23:42:15.368878 containerd[1765]: time="2025-05-13T23:42:15.368825080Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:42:15.368978 containerd[1765]: time="2025-05-13T23:42:15.368904840Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:42:15.368978 containerd[1765]: time="2025-05-13T23:42:15.368927200Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:42:15.368978 containerd[1765]: time="2025-05-13T23:42:15.368941520Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:42:15.368978 containerd[1765]: time="2025-05-13T23:42:15.368957640Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:42:15.368978 containerd[1765]: time="2025-05-13T23:42:15.368968880Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:42:15.369093 containerd[1765]: time="2025-05-13T23:42:15.368981040Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:42:15.369093 containerd[1765]: time="2025-05-13T23:42:15.368993040Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:42:15.369093 containerd[1765]: time="2025-05-13T23:42:15.369004960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:42:15.369093 containerd[1765]: time="2025-05-13T23:42:15.369015360Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:42:15.369093 containerd[1765]: time="2025-05-13T23:42:15.369025520Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:42:15.369093 containerd[1765]: time="2025-05-13T23:42:15.369037240Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:42:15.369188 containerd[1765]: time="2025-05-13T23:42:15.369176840Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:42:15.369206 containerd[1765]: time="2025-05-13T23:42:15.369197800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:42:15.369605 containerd[1765]: time="2025-05-13T23:42:15.369234920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:42:15.369605 containerd[1765]: time="2025-05-13T23:42:15.369251640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 May 13 23:42:15.369605 containerd[1765]: time="2025-05-13T23:42:15.369263040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:42:15.369605 containerd[1765]: time="2025-05-13T23:42:15.369279760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:42:15.369605 containerd[1765]: time="2025-05-13T23:42:15.369296160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:42:15.369605 containerd[1765]: time="2025-05-13T23:42:15.369307000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 23:42:15.369605 containerd[1765]: time="2025-05-13T23:42:15.369320640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:42:15.369605 containerd[1765]: time="2025-05-13T23:42:15.369332720Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:42:15.369605 containerd[1765]: time="2025-05-13T23:42:15.369343320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:42:15.369605 containerd[1765]: time="2025-05-13T23:42:15.369421680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:42:15.369605 containerd[1765]: time="2025-05-13T23:42:15.369435720Z" level=info msg="Start snapshots syncer" May 13 23:42:15.369605 containerd[1765]: time="2025-05-13T23:42:15.369462320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:42:15.369843 containerd[1765]: time="2025-05-13T23:42:15.369685360Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:42:15.369843 containerd[1765]: time="2025-05-13T23:42:15.369733160Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:42:15.369958 containerd[1765]: time="2025-05-13T23:42:15.369807520Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:42:15.373568 containerd[1765]: time="2025-05-13T23:42:15.373138200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:42:15.373568 containerd[1765]: time="2025-05-13T23:42:15.373358800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:42:15.373568 containerd[1765]: time="2025-05-13T23:42:15.373541240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:42:15.373568 containerd[1765]: time="2025-05-13T23:42:15.373558400Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:42:15.373712 containerd[1765]: time="2025-05-13T23:42:15.373577560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:42:15.373712 containerd[1765]: time="2025-05-13T23:42:15.373683760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:42:15.373712 containerd[1765]: time="2025-05-13T23:42:15.373704040Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:42:15.374230 containerd[1765]: time="2025-05-13T23:42:15.373742480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:42:15.374230 containerd[1765]: 
time="2025-05-13T23:42:15.374011840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:42:15.374230 containerd[1765]: time="2025-05-13T23:42:15.374028480Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:42:15.374585 containerd[1765]: time="2025-05-13T23:42:15.374199480Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:42:15.374585 containerd[1765]: time="2025-05-13T23:42:15.374343520Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:42:15.374585 containerd[1765]: time="2025-05-13T23:42:15.374360920Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:42:15.374585 containerd[1765]: time="2025-05-13T23:42:15.374376440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:42:15.374585 containerd[1765]: time="2025-05-13T23:42:15.374411360Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:42:15.374585 containerd[1765]: time="2025-05-13T23:42:15.374428360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:42:15.374585 containerd[1765]: time="2025-05-13T23:42:15.374443760Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:42:15.374585 containerd[1765]: time="2025-05-13T23:42:15.374476720Z" level=info msg="runtime interface created" May 13 23:42:15.374739 containerd[1765]: time="2025-05-13T23:42:15.374486600Z" level=info msg="created NRI interface" May 13 23:42:15.374739 containerd[1765]: time="2025-05-13T23:42:15.374666640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:42:15.374739 containerd[1765]: time="2025-05-13T23:42:15.374691480Z" level=info msg="Connect containerd service" May 13 23:42:15.374852 containerd[1765]: time="2025-05-13T23:42:15.374820680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:42:15.376932 containerd[1765]: time="2025-05-13T23:42:15.376836880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:42:15.435699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:42:15.517177 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:42:15.899031 kubelet[1888]: E0513 23:42:15.898976 1888 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:42:15.901435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:42:15.901691 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:42:15.903017 systemd[1]: kubelet.service: Consumed 674ms CPU time, 235M memory peak. May 13 23:42:16.943238 containerd[1765]: time="2025-05-13T23:42:16.943132960Z" level=info msg="Start subscribing containerd event" May 13 23:42:16.943238 containerd[1765]: time="2025-05-13T23:42:16.943198320Z" level=info msg="Start recovering state" May 13 23:42:16.943547 containerd[1765]: time="2025-05-13T23:42:16.943346480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:42:16.943547 containerd[1765]: time="2025-05-13T23:42:16.943408120Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:42:16.944001 containerd[1765]: time="2025-05-13T23:42:16.943687520Z" level=info msg="Start event monitor" May 13 23:42:16.944001 containerd[1765]: time="2025-05-13T23:42:16.943712040Z" level=info msg="Start cni network conf syncer for default" May 13 23:42:16.944001 containerd[1765]: time="2025-05-13T23:42:16.943723080Z" level=info msg="Start streaming server" May 13 23:42:16.944001 containerd[1765]: time="2025-05-13T23:42:16.943733160Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:42:16.944001 containerd[1765]: time="2025-05-13T23:42:16.943740640Z" level=info msg="runtime interface starting up..." May 13 23:42:16.944001 containerd[1765]: time="2025-05-13T23:42:16.943746000Z" level=info msg="starting plugins..." May 13 23:42:16.944001 containerd[1765]: time="2025-05-13T23:42:16.943762640Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:42:16.944001 containerd[1765]: time="2025-05-13T23:42:16.943892760Z" level=info msg="containerd successfully booted in 1.617309s" May 13 23:42:16.944102 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:42:16.952972 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:42:16.963289 systemd[1]: Startup finished in 665ms (kernel) + 11.241s (initrd) + 12.774s (userspace) = 24.682s. May 13 23:42:18.341905 login[1876]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:18.343553 login[1877]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:18.353305 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:42:18.356431 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:42:18.358986 systemd-logind[1726]: New session 1 of user core. May 13 23:42:18.366316 systemd-logind[1726]: New session 2 of user core. May 13 23:42:18.373131 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:42:18.375501 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 13 23:42:18.589104 (systemd)[1916]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:42:18.591275 systemd-logind[1726]: New session c1 of user core. May 13 23:42:18.673410 waagent[1873]: 2025-05-13T23:42:18.673338Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 May 13 23:42:18.679442 waagent[1873]: 2025-05-13T23:42:18.679384Z INFO Daemon Daemon OS: flatcar 4284.0.0 May 13 23:42:18.684140 waagent[1873]: 2025-05-13T23:42:18.684091Z INFO Daemon Daemon Python: 3.11.11 May 13 23:42:18.688920 waagent[1873]: 2025-05-13T23:42:18.688870Z INFO Daemon Daemon Run daemon May 13 23:42:18.692982 waagent[1873]: 2025-05-13T23:42:18.692926Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4284.0.0' May 13 23:42:18.701729 waagent[1873]: 2025-05-13T23:42:18.701668Z INFO Daemon Daemon Using waagent for provisioning May 13 23:42:18.707416 waagent[1873]: 2025-05-13T23:42:18.707366Z INFO Daemon Daemon Activate resource disk May 13 23:42:18.712107 waagent[1873]: 2025-05-13T23:42:18.712056Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 13 23:42:18.723664 waagent[1873]: 2025-05-13T23:42:18.723614Z INFO Daemon Daemon Found device: None May 13 23:42:18.728072 waagent[1873]: 2025-05-13T23:42:18.728026Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 13 23:42:18.747086 waagent[1873]: 2025-05-13T23:42:18.736485Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 13 23:42:18.748047 waagent[1873]: 2025-05-13T23:42:18.748006Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 13 23:42:18.754040 waagent[1873]: 2025-05-13T23:42:18.754001Z INFO Daemon Daemon Running default provisioning handler May 13 23:42:18.766415 waagent[1873]: 2025-05-13T23:42:18.766361Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. May 13 23:42:18.781549 waagent[1873]: 2025-05-13T23:42:18.781497Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 13 23:42:18.791007 waagent[1873]: 2025-05-13T23:42:18.790957Z INFO Daemon Daemon cloud-init is enabled: False May 13 23:42:18.796252 waagent[1873]: 2025-05-13T23:42:18.796197Z INFO Daemon Daemon Copying ovf-env.xml May 13 23:42:18.816570 systemd[1916]: Queued start job for default target default.target. May 13 23:42:18.828522 systemd[1916]: Created slice app.slice - User Application Slice. May 13 23:42:18.828555 systemd[1916]: Reached target paths.target - Paths. May 13 23:42:18.828594 systemd[1916]: Reached target timers.target - Timers. May 13 23:42:18.829812 systemd[1916]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:42:18.839767 systemd[1916]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:42:18.839834 systemd[1916]: Reached target sockets.target - Sockets. May 13 23:42:18.839880 systemd[1916]: Reached target basic.target - Basic System. May 13 23:42:18.839909 systemd[1916]: Reached target default.target - Main User Target. May 13 23:42:18.839933 systemd[1916]: Startup finished in 242ms. May 13 23:42:18.840072 systemd[1]: Started user@500.service - User Manager for UID 500. 
May 13 23:42:18.850356 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:42:18.851060 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:42:19.293234 waagent[1873]: 2025-05-13T23:42:19.292599Z INFO Daemon Daemon Successfully mounted dvd May 13 23:42:19.381688 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 13 23:42:19.385652 waagent[1873]: 2025-05-13T23:42:19.385581Z INFO Daemon Daemon Detect protocol endpoint May 13 23:42:19.390756 waagent[1873]: 2025-05-13T23:42:19.390535Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 13 23:42:19.396099 waagent[1873]: 2025-05-13T23:42:19.396053Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler May 13 23:42:19.403932 waagent[1873]: 2025-05-13T23:42:19.403236Z INFO Daemon Daemon Test for route to 168.63.129.16 May 13 23:42:19.408454 waagent[1873]: 2025-05-13T23:42:19.408409Z INFO Daemon Daemon Route to 168.63.129.16 exists May 13 23:42:19.413668 waagent[1873]: 2025-05-13T23:42:19.413633Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 13 23:42:20.489630 waagent[1873]: 2025-05-13T23:42:20.489581Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 13 23:42:21.123394 waagent[1873]: 2025-05-13T23:42:21.082920Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 13 23:42:21.123394 waagent[1873]: 2025-05-13T23:42:21.088049Z INFO Daemon Daemon Server preferred version:2015-04-05 May 13 23:42:23.026248 waagent[1873]: 2025-05-13T23:42:23.025749Z INFO Daemon Daemon Initializing goal state during protocol detection May 13 23:42:23.032457 waagent[1873]: 2025-05-13T23:42:23.032404Z INFO Daemon Daemon Forcing an update of the goal state. May 13 23:42:23.041359 waagent[1873]: 2025-05-13T23:42:23.041318Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 13 23:42:23.064589 waagent[1873]: 2025-05-13T23:42:23.064554Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 13 23:42:23.070873 waagent[1873]: 2025-05-13T23:42:23.070835Z INFO Daemon May 13 23:42:23.073783 waagent[1873]: 2025-05-13T23:42:23.073749Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 8eb8e5d9-1634-4a44-a6ff-ca2e902f9ef8 eTag: 2900377458648342588 source: Fabric] May 13 23:42:23.084906 waagent[1873]: 2025-05-13T23:42:23.084865Z INFO Daemon The vmSettings originated via Fabric; will ignore them. May 13 23:42:23.091663 waagent[1873]: 2025-05-13T23:42:23.091626Z INFO Daemon May 13 23:42:23.094528 waagent[1873]: 2025-05-13T23:42:23.094492Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 13 23:42:23.105500 waagent[1873]: 2025-05-13T23:42:23.105472Z INFO Daemon Daemon Downloading artifacts profile blob May 13 23:42:23.254039 waagent[1873]: 2025-05-13T23:42:23.253961Z INFO Daemon Downloaded certificate {'thumbprint': 'A689E57C5ADCE282E97290043785570CDBE7D796', 'hasPrivateKey': False} May 13 23:42:23.263826 waagent[1873]: 2025-05-13T23:42:23.263783Z INFO Daemon Downloaded certificate {'thumbprint': '0B578C39987D18560EC99028CCCD5492DCF1CA03', 'hasPrivateKey': True} May 13 23:42:23.273964 waagent[1873]: 2025-05-13T23:42:23.273921Z INFO Daemon Fetch goal state completed May 13 23:42:23.285578 waagent[1873]: 2025-05-13T23:42:23.285480Z INFO Daemon Daemon Starting provisioning May 13 23:42:23.291021 waagent[1873]: 2025-05-13T23:42:23.290972Z INFO Daemon Daemon Handle ovf-env.xml. 
May 13 23:42:23.296153 waagent[1873]: 2025-05-13T23:42:23.296115Z INFO Daemon Daemon Set hostname [ci-4284.0.0-n-13ce75130c] May 13 23:42:23.453239 waagent[1873]: 2025-05-13T23:42:23.453064Z INFO Daemon Daemon Publish hostname [ci-4284.0.0-n-13ce75130c] May 13 23:42:23.465237 waagent[1873]: 2025-05-13T23:42:23.459933Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 13 23:42:23.466354 waagent[1873]: 2025-05-13T23:42:23.466310Z INFO Daemon Daemon Primary interface is [eth0] May 13 23:42:23.477929 systemd-networkd[1332]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:42:23.477937 systemd-networkd[1332]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:42:23.477964 systemd-networkd[1332]: eth0: DHCP lease lost May 13 23:42:23.478829 waagent[1873]: 2025-05-13T23:42:23.478776Z INFO Daemon Daemon Create user account if not exists May 13 23:42:23.484641 waagent[1873]: 2025-05-13T23:42:23.484593Z INFO Daemon Daemon User core already exists, skip useradd May 13 23:42:23.491142 waagent[1873]: 2025-05-13T23:42:23.491089Z INFO Daemon Daemon Configure sudoer May 13 23:42:23.495707 waagent[1873]: 2025-05-13T23:42:23.495664Z INFO Daemon Daemon Configure sshd May 13 23:42:23.500512 waagent[1873]: 2025-05-13T23:42:23.500417Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 13 23:42:23.512648 waagent[1873]: 2025-05-13T23:42:23.512607Z INFO Daemon Daemon Deploy ssh public key. May 13 23:42:23.519373 systemd-networkd[1332]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 13 23:42:24.805949 waagent[1873]: 2025-05-13T23:42:24.805886Z INFO Daemon Daemon Provisioning complete May 13 23:42:24.823459 waagent[1873]: 2025-05-13T23:42:24.823419Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 13 23:42:24.829496 waagent[1873]: 2025-05-13T23:42:24.829453Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 13 23:42:24.838790 waagent[1873]: 2025-05-13T23:42:24.838752Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent May 13 23:42:24.969037 waagent[1972]: 2025-05-13T23:42:24.968501Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) May 13 23:42:24.969037 waagent[1972]: 2025-05-13T23:42:24.968641Z INFO ExtHandler ExtHandler OS: flatcar 4284.0.0 May 13 23:42:24.969037 waagent[1972]: 2025-05-13T23:42:24.968686Z INFO ExtHandler ExtHandler Python: 3.11.11 May 13 23:42:24.969037 waagent[1972]: 2025-05-13T23:42:24.968735Z INFO ExtHandler ExtHandler CPU Arch: aarch64 May 13 23:42:26.068085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:42:26.069723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:42:27.134855 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:42:27.918693 waagent[1972]: 2025-05-13T23:42:27.786557Z INFO ExtHandler ExtHandler Distro: flatcar-4284.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 13 23:42:27.135949 systemd[1]: Started sshd@0-10.200.20.40:22-10.200.16.10:43920.service - OpenSSH per-connection server daemon (10.200.16.10:43920). 
May 13 23:42:27.918929 waagent[1972]: 2025-05-13T23:42:27.918739Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 13 23:42:27.918929 waagent[1972]: 2025-05-13T23:42:27.918843Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 13 23:42:27.930896 waagent[1972]: 2025-05-13T23:42:27.930840Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 13 23:42:27.938761 waagent[1972]: 2025-05-13T23:42:27.938726Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 13 23:42:27.939267 waagent[1972]: 2025-05-13T23:42:27.939208Z INFO ExtHandler May 13 23:42:27.939339 waagent[1972]: 2025-05-13T23:42:27.939315Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9bd1af83-1993-47f2-8151-34b1417a3697 eTag: 2900377458648342588 source: Fabric] May 13 23:42:27.939619 waagent[1972]: 2025-05-13T23:42:27.939585Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 13 23:42:27.940120 waagent[1972]: 2025-05-13T23:42:27.940085Z INFO ExtHandler May 13 23:42:27.940174 waagent[1972]: 2025-05-13T23:42:27.940151Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 13 23:42:27.943909 waagent[1972]: 2025-05-13T23:42:27.943868Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 13 23:42:28.474204 waagent[1972]: 2025-05-13T23:42:28.472354Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A689E57C5ADCE282E97290043785570CDBE7D796', 'hasPrivateKey': False} May 13 23:42:28.474204 waagent[1972]: 2025-05-13T23:42:28.472920Z INFO ExtHandler Downloaded certificate {'thumbprint': '0B578C39987D18560EC99028CCCD5492DCF1CA03', 'hasPrivateKey': True} May 13 23:42:28.474204 waagent[1972]: 2025-05-13T23:42:28.473339Z INFO ExtHandler Fetch goal state completed May 13 23:42:28.491324 waagent[1972]: 2025-05-13T23:42:28.491261Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) May 13 23:42:28.495824 waagent[1972]: 2025-05-13T23:42:28.495764Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1972 May 13 23:42:28.495960 waagent[1972]: 2025-05-13T23:42:28.495927Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 13 23:42:28.496319 waagent[1972]: 2025-05-13T23:42:28.496285Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** May 13 23:42:28.497762 waagent[1972]: 2025-05-13T23:42:28.497724Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] May 13 23:42:28.498149 waagent[1972]: 2025-05-13T23:42:28.498115Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported May 13 23:42:28.498311 waagent[1972]: 2025-05-13T23:42:28.498281Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 13 23:42:28.498880 waagent[1972]: 2025-05-13T23:42:28.498846Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 13 23:42:29.631889 waagent[1972]: 2025-05-13T23:42:29.631848Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 13 23:42:29.632193 waagent[1972]: 2025-05-13T23:42:29.632041Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup May 13 23:42:29.637632 waagent[1972]: 2025-05-13T23:42:29.637591Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 13 23:42:29.643392 systemd[1]: Reload requested from client PID 1998 ('systemctl') (unit waagent.service)... May 13 23:42:29.643626 systemd[1]: Reloading... May 13 23:42:29.726310 zram_generator::config[2037]: No configuration found. May 13 23:42:29.832925 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:42:29.934813 systemd[1]: Reloading finished in 290 ms. May 13 23:42:29.952743 waagent[1972]: 2025-05-13T23:42:29.951309Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 13 23:42:29.952743 waagent[1972]: 2025-05-13T23:42:29.951452Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 13 23:42:30.035845 sshd[1984]: Accepted publickey for core from 10.200.16.10 port 43920 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:42:30.037308 sshd-session[1984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:30.041316 systemd-logind[1726]: New session 3 of user core. May 13 23:42:30.050358 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:42:30.485162 systemd[1]: Started sshd@1-10.200.20.40:22-10.200.16.10:44464.service - OpenSSH per-connection server daemon (10.200.16.10:44464). May 13 23:42:30.954379 sshd[2099]: Accepted publickey for core from 10.200.16.10 port 44464 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:42:30.955664 sshd-session[2099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:30.959767 systemd-logind[1726]: New session 4 of user core. May 13 23:42:30.967346 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:42:31.280839 sshd[2101]: Connection closed by 10.200.16.10 port 44464 May 13 23:42:31.281363 sshd-session[2099]: pam_unix(sshd:session): session closed for user core May 13 23:42:31.284835 systemd[1]: sshd@1-10.200.20.40:22-10.200.16.10:44464.service: Deactivated successfully. May 13 23:42:31.286414 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:42:31.287073 systemd-logind[1726]: Session 4 logged out. Waiting for processes to exit. May 13 23:42:31.287898 systemd-logind[1726]: Removed session 4. May 13 23:42:31.362416 systemd[1]: Started sshd@2-10.200.20.40:22-10.200.16.10:44478.service - OpenSSH per-connection server daemon (10.200.16.10:44478). May 13 23:42:31.818663 sshd[2108]: Accepted publickey for core from 10.200.16.10 port 44478 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:42:31.820040 sshd-session[2108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:31.825275 systemd-logind[1726]: New session 5 of user core. May 13 23:42:31.827340 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:42:32.140554 sshd[2111]: Connection closed by 10.200.16.10 port 44478 May 13 23:42:32.140467 sshd-session[2108]: pam_unix(sshd:session): session closed for user core May 13 23:42:32.143145 systemd-logind[1726]: Session 5 logged out. Waiting for processes to exit. 
May 13 23:42:32.143318 systemd[1]: sshd@2-10.200.20.40:22-10.200.16.10:44478.service: Deactivated successfully. May 13 23:42:32.144878 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:42:32.147753 systemd-logind[1726]: Removed session 5. May 13 23:42:32.228390 systemd[1]: Started sshd@3-10.200.20.40:22-10.200.16.10:44482.service - OpenSSH per-connection server daemon (10.200.16.10:44482). May 13 23:42:32.681248 sshd[2117]: Accepted publickey for core from 10.200.16.10 port 44482 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:42:32.682500 sshd-session[2117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:32.687880 systemd-logind[1726]: New session 6 of user core. May 13 23:42:32.692401 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 23:42:33.371154 sshd[2119]: Connection closed by 10.200.16.10 port 44482 May 13 23:42:33.370938 sshd-session[2117]: pam_unix(sshd:session): session closed for user core May 13 23:42:33.111284 systemd[1]: Started sshd@4-10.200.20.40:22-10.200.16.10:44494.service - OpenSSH per-connection server daemon (10.200.16.10:44494). May 13 23:42:33.375710 systemd[1]: sshd@3-10.200.20.40:22-10.200.16.10:44482.service: Deactivated successfully. May 13 23:42:33.377110 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:42:33.378564 systemd-logind[1726]: Session 6 logged out. Waiting for processes to exit. May 13 23:42:33.379443 systemd-logind[1726]: Removed session 6. May 13 23:42:33.565527 sshd[2122]: Accepted publickey for core from 10.200.16.10 port 44494 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:42:33.566756 sshd-session[2122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:33.571230 systemd-logind[1726]: New session 7 of user core. May 13 23:42:33.576347 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:42:34.245570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:42:34.251529 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:42:34.289980 kubelet[2134]: E0513 23:42:34.289905 2134 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:42:34.292814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:42:34.292956 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:42:34.293403 systemd[1]: kubelet.service: Consumed 131ms CPU time, 95M memory peak. May 13 23:42:34.329130 sudo[2128]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 23:42:34.329421 sudo[2128]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:42:34.359468 sudo[2128]: pam_unix(sudo:session): session closed for user root May 13 23:42:34.448638 sshd[2127]: Connection closed by 10.200.16.10 port 44494 May 13 23:42:34.448493 sshd-session[2122]: pam_unix(sshd:session): session closed for user core May 13 23:42:34.452102 systemd[1]: sshd@4-10.200.20.40:22-10.200.16.10:44494.service: Deactivated successfully. May 13 23:42:34.453835 systemd[1]: session-7.scope: Deactivated successfully. 
May 13 23:42:34.455745 systemd-logind[1726]: Session 7 logged out. Waiting for processes to exit. May 13 23:42:34.456949 systemd-logind[1726]: Removed session 7. May 13 23:42:34.461187 waagent[1972]: 2025-05-13T23:42:34.461127Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 13 23:42:34.461591 waagent[1972]: 2025-05-13T23:42:34.461547Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 13 23:42:34.462348 waagent[1972]: 2025-05-13T23:42:34.462271Z INFO ExtHandler ExtHandler Starting env monitor service. May 13 23:42:34.462693 waagent[1972]: 2025-05-13T23:42:34.462614Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 13 23:42:34.463719 waagent[1972]: 2025-05-13T23:42:34.463001Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 13 23:42:34.463719 waagent[1972]: 2025-05-13T23:42:34.463074Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 13 23:42:34.463719 waagent[1972]: 2025-05-13T23:42:34.463290Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 13 23:42:34.463719 waagent[1972]: 2025-05-13T23:42:34.463465Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 13 23:42:34.463719 waagent[1972]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 13 23:42:34.463719 waagent[1972]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 13 23:42:34.463719 waagent[1972]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 13 23:42:34.463719 waagent[1972]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 13 23:42:34.463719 waagent[1972]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 13 23:42:34.463719 waagent[1972]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 13 23:42:34.464028 waagent[1972]: 2025-05-13T23:42:34.463970Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 13 23:42:34.464292 waagent[1972]: 2025-05-13T23:42:34.464253Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 13 23:42:34.464403 waagent[1972]: 2025-05-13T23:42:34.464372Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 13 23:42:34.464792 waagent[1972]: 2025-05-13T23:42:34.464741Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 13 23:42:34.464874 waagent[1972]: 2025-05-13T23:42:34.464848Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 13 23:42:34.464992 waagent[1972]: 2025-05-13T23:42:34.464958Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
May 13 23:42:34.465154 waagent[1972]: 2025-05-13T23:42:34.465117Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 13 23:42:34.465581 waagent[1972]: 2025-05-13T23:42:34.465529Z INFO EnvHandler ExtHandler Configure routes May 13 23:42:34.466072 waagent[1972]: 2025-05-13T23:42:34.466039Z INFO EnvHandler ExtHandler Gateway:None May 13 23:42:34.466721 waagent[1972]: 2025-05-13T23:42:34.466686Z INFO EnvHandler ExtHandler Routes:None May 13 23:42:34.478563 waagent[1972]: 2025-05-13T23:42:34.478523Z INFO ExtHandler ExtHandler May 13 23:42:34.478773 waagent[1972]: 2025-05-13T23:42:34.478742Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 17a0e244-f728-4d23-b043-19dbab21317f correlation 761509ca-7cf1-443f-ba32-74d821c589b6 created: 2025-05-13T23:41:10.253159Z] May 13 23:42:34.479231 waagent[1972]: 2025-05-13T23:42:34.479188Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 13 23:42:34.480235 waagent[1972]: 2025-05-13T23:42:34.480154Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] May 13 23:42:34.511254 waagent[1972]: 2025-05-13T23:42:34.510788Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: AD0FBCF0-AE51-429F-A33A-F72517F5F95B;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] May 13 23:42:34.529915 systemd[1]: Started sshd@5-10.200.20.40:22-10.200.16.10:44500.service - OpenSSH per-connection server daemon (10.200.16.10:44500). May 13 23:42:34.535249 waagent[1972]: 2025-05-13T23:42:34.534896Z INFO MonitorHandler ExtHandler Network interfaces: May 13 23:42:34.535249 waagent[1972]: Executing ['ip', '-a', '-o', 'link']: May 13 23:42:34.535249 waagent[1972]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 13 23:42:34.535249 waagent[1972]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f9:eb:b7 brd ff:ff:ff:ff:ff:ff May 13 23:42:34.535249 waagent[1972]: 3: enP26479s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f9:eb:b7 brd ff:ff:ff:ff:ff:ff\ altname enP26479p0s2 May 13 23:42:34.535249 waagent[1972]: Executing ['ip', '-4', '-a', '-o', 'address']: May 13 23:42:34.535249 waagent[1972]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 13 23:42:34.535249 waagent[1972]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 13 23:42:34.535249 waagent[1972]: Executing ['ip', '-6', '-a', '-o', 'address']: May 13 23:42:34.535249 waagent[1972]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 13 23:42:34.535249 waagent[1972]: 2: eth0 inet6 fe80::20d:3aff:fef9:ebb7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 13 23:42:34.535249 waagent[1972]: 3: enP26479s1 inet6 fe80::20d:3aff:fef9:ebb7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 13 23:42:34.603140 waagent[1972]: 2025-05-13T23:42:34.602377Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: May 13 23:42:34.603140 waagent[1972]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 13 23:42:34.603140 waagent[1972]: pkts bytes target prot opt in out source destination May 13 
23:42:34.603140 waagent[1972]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 13 23:42:34.603140 waagent[1972]: pkts bytes target prot opt in out source destination May 13 23:42:34.603140 waagent[1972]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 13 23:42:34.603140 waagent[1972]: pkts bytes target prot opt in out source destination May 13 23:42:34.603140 waagent[1972]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 13 23:42:34.603140 waagent[1972]: 6 520 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 13 23:42:34.603140 waagent[1972]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 13 23:42:34.605381 waagent[1972]: 2025-05-13T23:42:34.605341Z INFO EnvHandler ExtHandler Current Firewall rules: May 13 23:42:34.605381 waagent[1972]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 13 23:42:34.605381 waagent[1972]: pkts bytes target prot opt in out source destination May 13 23:42:34.605381 waagent[1972]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 13 23:42:34.605381 waagent[1972]: pkts bytes target prot opt in out source destination May 13 23:42:34.605381 waagent[1972]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 13 23:42:34.605381 waagent[1972]: pkts bytes target prot opt in out source destination May 13 23:42:34.605381 waagent[1972]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 13 23:42:34.605381 waagent[1972]: 10 868 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 13 23:42:34.605381 waagent[1972]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 13 23:42:34.606075 waagent[1972]: 2025-05-13T23:42:34.606048Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 13 23:42:34.989366 sshd[2161]: Accepted publickey for core from 10.200.16.10 port 44500 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:42:34.990669 sshd-session[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:34.996130 systemd-logind[1726]: New session 8 of user core. May 13 23:42:34.998358 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 23:42:35.243830 sudo[2180]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:42:35.244416 sudo[2180]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:42:35.247496 sudo[2180]: pam_unix(sudo:session): session closed for user root May 13 23:42:35.252034 sudo[2179]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:42:35.252592 sudo[2179]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:42:35.260622 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:42:35.291899 augenrules[2202]: No rules May 13 23:42:35.293349 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:42:35.293651 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:42:35.296412 sudo[2179]: pam_unix(sudo:session): session closed for user root May 13 23:42:35.366241 sshd[2178]: Connection closed by 10.200.16.10 port 44500 May 13 23:42:35.366725 sshd-session[2161]: pam_unix(sshd:session): session closed for user core May 13 23:42:35.369970 systemd[1]: sshd@5-10.200.20.40:22-10.200.16.10:44500.service: Deactivated successfully. May 13 23:42:35.371617 systemd[1]: session-8.scope: Deactivated successfully. 
May 13 23:42:35.373285 systemd-logind[1726]: Session 8 logged out. Waiting for processes to exit. May 13 23:42:35.374144 systemd-logind[1726]: Removed session 8. May 13 23:42:35.462286 systemd[1]: Started sshd@6-10.200.20.40:22-10.200.16.10:44510.service - OpenSSH per-connection server daemon (10.200.16.10:44510). May 13 23:42:35.922601 sshd[2211]: Accepted publickey for core from 10.200.16.10 port 44510 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:42:35.923842 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:35.928298 systemd-logind[1726]: New session 9 of user core. May 13 23:42:35.934371 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 23:42:36.180016 sudo[2214]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:42:36.180476 sudo[2214]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:42:38.000273 chronyd[1713]: Selected source PHC0 May 13 23:42:40.300397 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 23:42:40.313481 (dockerd)[2232]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:42:42.815710 dockerd[2232]: time="2025-05-13T23:42:42.815652228Z" level=info msg="Starting up" May 13 23:42:42.816643 dockerd[2232]: time="2025-05-13T23:42:42.816605148Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:42:44.385084 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 23:42:44.386752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:42:46.558371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:42:46.566445 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:42:46.599871 kubelet[2260]: E0513 23:42:46.599818 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:42:46.602249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:42:46.602493 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:42:46.602966 systemd[1]: kubelet.service: Consumed 121ms CPU time, 97M memory peak. May 13 23:42:51.379677 dockerd[2232]: time="2025-05-13T23:42:51.379449906Z" level=info msg="Loading containers: start." May 13 23:42:52.452238 kernel: Initializing XFRM netlink socket May 13 23:42:52.546753 systemd-networkd[1332]: docker0: Link UP May 13 23:42:52.620282 dockerd[2232]: time="2025-05-13T23:42:52.620232172Z" level=info msg="Loading containers: done." May 13 23:42:52.697561 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2297534127-merged.mount: Deactivated successfully. 
May 13 23:42:53.224964 dockerd[2232]: time="2025-05-13T23:42:53.224907950Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:42:53.225155 dockerd[2232]: time="2025-05-13T23:42:53.225014950Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:42:53.225155 dockerd[2232]: time="2025-05-13T23:42:53.225139710Z" level=info msg="Daemon has completed initialization" May 13 23:42:53.436411 dockerd[2232]: time="2025-05-13T23:42:53.436260192Z" level=info msg="API listen on /run/docker.sock" May 13 23:42:53.436685 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:42:54.284750 containerd[1765]: time="2025-05-13T23:42:54.284588154Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 13 23:42:56.247415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3362860962.mount: Deactivated successfully. May 13 23:42:56.635140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 23:42:56.636741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:42:56.748559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:42:56.758446 (kubelet)[2456]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:42:56.790604 kubelet[2456]: E0513 23:42:56.790552 2456 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:42:56.792951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:42:56.793194 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:42:56.793712 systemd[1]: kubelet.service: Consumed 119ms CPU time, 96.3M memory peak. May 13 23:42:59.928260 kernel: hv_balloon: Max. dynamic memory size: 4096 MB May 13 23:42:59.928382 update_engine[1728]: I20250513 23:42:59.282933 1728 update_attempter.cc:509] Updating boot flags... 
May 13 23:42:59.970318 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2477) May 13 23:43:00.084463 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2480) May 13 23:43:00.191233 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2480) May 13 23:43:06.166096 containerd[1765]: time="2025-05-13T23:43:06.166044532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:06.172044 containerd[1765]: time="2025-05-13T23:43:06.171993222Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554608" May 13 23:43:06.230595 containerd[1765]: time="2025-05-13T23:43:06.230537443Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:06.241176 containerd[1765]: time="2025-05-13T23:43:06.241110142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:06.242396 containerd[1765]: time="2025-05-13T23:43:06.241988903Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 11.957290628s" May 13 23:43:06.242396 containerd[1765]: time="2025-05-13T23:43:06.242024663Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 13 23:43:06.242642 containerd[1765]: time="2025-05-13T23:43:06.242602584Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 13 23:43:06.885157 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 13 23:43:06.886945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:43:07.011356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:43:07.014717 (kubelet)[2685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:43:07.047013 kubelet[2685]: E0513 23:43:07.046936 2685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:43:07.049472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:43:07.049614 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:43:07.049980 systemd[1]: kubelet.service: Consumed 117ms CPU time, 94.4M memory peak. 
May 13 23:43:13.552308 containerd[1765]: time="2025-05-13T23:43:13.552258501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:13.558681 containerd[1765]: time="2025-05-13T23:43:13.558625349Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458978" May 13 23:43:13.564332 containerd[1765]: time="2025-05-13T23:43:13.564283316Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:13.579303 containerd[1765]: time="2025-05-13T23:43:13.579240336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:13.580114 containerd[1765]: time="2025-05-13T23:43:13.580074017Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 7.337371713s" May 13 23:43:13.580173 containerd[1765]: time="2025-05-13T23:43:13.580115177Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 13 23:43:13.580761 containerd[1765]: time="2025-05-13T23:43:13.580567377Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 13 23:43:14.836641 containerd[1765]: time="2025-05-13T23:43:14.836587167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:14.840229 containerd[1765]: time="2025-05-13T23:43:14.840184092Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125813" May 13 23:43:14.847318 containerd[1765]: time="2025-05-13T23:43:14.847263461Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:14.853663 containerd[1765]: time="2025-05-13T23:43:14.853605589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:14.857435 containerd[1765]: time="2025-05-13T23:43:14.856040712Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.275441494s" May 13 23:43:14.857435 containerd[1765]: time="2025-05-13T23:43:14.856082152Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 13 23:43:14.857659 
containerd[1765]: time="2025-05-13T23:43:14.857632034Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 13 23:43:16.091387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1282852217.mount: Deactivated successfully. May 13 23:43:16.725747 containerd[1765]: time="2025-05-13T23:43:16.725646498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:17.135133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 13 23:43:17.136709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:43:17.366869 containerd[1765]: time="2025-05-13T23:43:17.366794490Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871917" May 13 23:43:18.536111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:43:18.547451 (kubelet)[2716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:43:18.580305 kubelet[2716]: E0513 23:43:18.580255 2716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:43:18.582625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:43:18.582775 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:43:18.583273 systemd[1]: kubelet.service: Consumed 120ms CPU time, 94M memory peak. May 13 23:43:22.672943 containerd[1765]: time="2025-05-13T23:43:22.672893021Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:24.917423 containerd[1765]: time="2025-05-13T23:43:24.917337482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:24.917984 containerd[1765]: time="2025-05-13T23:43:24.917810922Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 10.060141528s" May 13 23:43:24.917984 containerd[1765]: time="2025-05-13T23:43:24.917857962Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 13 23:43:24.918533 containerd[1765]: time="2025-05-13T23:43:24.918465883Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 23:43:26.593695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount243256885.mount: Deactivated successfully. May 13 23:43:28.635140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 13 23:43:28.638392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 13 23:43:28.750428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:43:28.762667 (kubelet)[2743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:43:28.806002 kubelet[2743]: E0513 23:43:28.805890 2743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:43:28.807878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:43:28.808005 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:43:28.809641 systemd[1]: kubelet.service: Consumed 127ms CPU time, 94.2M memory peak. May 13 23:43:34.811824 containerd[1765]: time="2025-05-13T23:43:34.811766835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:34.816294 containerd[1765]: time="2025-05-13T23:43:34.816230520Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" May 13 23:43:34.822580 containerd[1765]: time="2025-05-13T23:43:34.822528527Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:34.828743 containerd[1765]: time="2025-05-13T23:43:34.828673774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:34.832008 containerd[1765]: time="2025-05-13T23:43:34.831073457Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 9.912461854s" May 13 23:43:34.832008 containerd[1765]: time="2025-05-13T23:43:34.831113217Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 13 23:43:34.832554 containerd[1765]: time="2025-05-13T23:43:34.832526179Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 23:43:35.612235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194887674.mount: Deactivated successfully. 
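By this point the restart counter has climbed from 3 to 6 and keeps climbing; every attempt fails identically and consumes roughly 120 ms of CPU. If you need to pull that pattern out of a saved copy of this journal, a small stdin filter is enough; the regular expression below is written only for the exact phrasing these systemd entries use:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches entries of the form:
	//   systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	re := regexp.MustCompile(`kubelet\.service: Scheduled restart job, restart counter is at (\d+)`)

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("kubelet restart attempt #%s\n", m[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
		os.Exit(1)
	}
}
```

Fed a saved copy of this journal, it would report the attempts numbered 3 through 8 seen in this stretch of the log.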
May 13 23:43:35.645601 containerd[1765]: time="2025-05-13T23:43:35.645546884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:43:35.648361 containerd[1765]: time="2025-05-13T23:43:35.648312167Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 13 23:43:35.655412 containerd[1765]: time="2025-05-13T23:43:35.655364575Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:43:35.662956 containerd[1765]: time="2025-05-13T23:43:35.662909304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:43:35.663615 containerd[1765]: time="2025-05-13T23:43:35.663491344Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 830.848605ms" May 13 23:43:35.663615 containerd[1765]: time="2025-05-13T23:43:35.663525544Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 13 23:43:35.664261 containerd[1765]: time="2025-05-13T23:43:35.664097065Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 13 23:43:36.833993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184533304.mount: Deactivated successfully. May 13 23:43:38.885177 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 13 23:43:38.888549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:43:43.306690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:43:43.312675 (kubelet)[2807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:43:43.344406 kubelet[2807]: E0513 23:43:43.344365 2807 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:43:43.346666 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:43:43.346816 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:43:43.347337 systemd[1]: kubelet.service: Consumed 123ms CPU time, 94.3M memory peak. 
May 13 23:43:48.872491 containerd[1765]: time="2025-05-13T23:43:48.872437821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:48.916419 containerd[1765]: time="2025-05-13T23:43:48.916357277Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" May 13 23:43:48.922558 containerd[1765]: time="2025-05-13T23:43:48.922509405Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:48.986236 containerd[1765]: time="2025-05-13T23:43:48.986155485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:48.987358 containerd[1765]: time="2025-05-13T23:43:48.987235886Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 13.323106421s" May 13 23:43:48.987358 containerd[1765]: time="2025-05-13T23:43:48.987267607Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 13 23:43:53.385901 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 13 23:43:53.389415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:43:53.917403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:43:53.925761 (kubelet)[2882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:43:53.966593 kubelet[2882]: E0513 23:43:53.964387 2882 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:43:53.967394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:43:53.967525 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:43:53.967802 systemd[1]: kubelet.service: Consumed 126ms CPU time, 94.1M memory peak. May 13 23:43:54.173408 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:43:54.173544 systemd[1]: kubelet.service: Consumed 126ms CPU time, 94.1M memory peak. May 13 23:43:54.175847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:43:54.213809 systemd[1]: Reload requested from client PID 2896 ('systemctl') (unit session-9.scope)... May 13 23:43:54.213956 systemd[1]: Reloading... May 13 23:43:54.335795 zram_generator::config[2946]: No configuration found. May 13 23:43:54.434125 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:43:54.537983 systemd[1]: Reloading finished in 323 ms. 
May 13 23:43:54.574989 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 23:43:54.575061 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 23:43:54.575316 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:43:54.575377 systemd[1]: kubelet.service: Consumed 84ms CPU time, 82.4M memory peak. May 13 23:43:54.578983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:43:54.700684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:43:54.714466 (kubelet)[3011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:43:54.750031 kubelet[3011]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:43:54.750031 kubelet[3011]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:43:54.750031 kubelet[3011]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:43:54.750031 kubelet[3011]: I0513 23:43:54.749916 3011 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:43:55.574243 kubelet[3011]: I0513 23:43:55.573259 3011 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 23:43:55.574243 kubelet[3011]: I0513 23:43:55.573289 3011 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:43:55.574243 kubelet[3011]: I0513 23:43:55.573517 3011 server.go:929] "Client rotation is on, will bootstrap in background" May 13 23:43:55.593608 kubelet[3011]: E0513 23:43:55.593575 3011 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:43:55.595801 kubelet[3011]: I0513 23:43:55.595753 3011 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:43:55.603961 kubelet[3011]: I0513 23:43:55.603935 3011 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:43:55.607749 kubelet[3011]: I0513 23:43:55.607722 3011 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:43:55.608425 kubelet[3011]: I0513 23:43:55.608401 3011 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 23:43:55.608589 kubelet[3011]: I0513 23:43:55.608558 3011 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:43:55.608778 kubelet[3011]: I0513 23:43:55.608589 3011 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-13ce75130c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:43:55.608870 kubelet[3011]: I0513 23:43:55.608787 3011 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:43:55.608870 kubelet[3011]: I0513 23:43:55.608798 3011 container_manager_linux.go:300] "Creating device plugin manager" May 13 23:43:55.608939 kubelet[3011]: I0513 23:43:55.608918 3011 state_mem.go:36] "Initialized new in-memory state store" May 13 23:43:55.610749 kubelet[3011]: I0513 23:43:55.610466 3011 kubelet.go:408] "Attempting to sync node with API server" May 13 23:43:55.610749 kubelet[3011]: I0513 23:43:55.610501 3011 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:43:55.610749 kubelet[3011]: I0513 23:43:55.610529 3011 kubelet.go:314] "Adding apiserver pod source" May 13 23:43:55.610749 kubelet[3011]: I0513 23:43:55.610539 3011 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:43:55.612751 kubelet[3011]: W0513 23:43:55.612693 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-13ce75130c&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:43:55.612839 kubelet[3011]: E0513 23:43:55.612756 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-13ce75130c&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:43:55.613276 kubelet[3011]: W0513 23:43:55.613071 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:43:55.613276 kubelet[3011]: E0513 23:43:55.613120 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:43:55.614850 kubelet[3011]: I0513 23:43:55.613489 3011 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:43:55.615011 kubelet[3011]: I0513 23:43:55.614981 3011 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:43:55.616346 kubelet[3011]: W0513 23:43:55.615904 3011 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 23:43:55.617234 kubelet[3011]: I0513 23:43:55.617117 3011 server.go:1269] "Started kubelet" May 13 23:43:55.618117 kubelet[3011]: I0513 23:43:55.618082 3011 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:43:55.619194 kubelet[3011]: I0513 23:43:55.618920 3011 server.go:460] "Adding debug handlers to kubelet server" May 13 23:43:55.619936 kubelet[3011]: I0513 23:43:55.619889 3011 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:43:55.620290 kubelet[3011]: I0513 23:43:55.620273 3011 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:43:55.622547 kubelet[3011]: I0513 23:43:55.621090 3011 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:43:55.622547 kubelet[3011]: E0513 23:43:55.620491 3011 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-13ce75130c.183f3acdb4aeaea3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-13ce75130c,UID:ci-4284.0.0-n-13ce75130c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-13ce75130c,},FirstTimestamp:2025-05-13 23:43:55.617095331 +0000 UTC m=+0.900001216,LastTimestamp:2025-05-13 23:43:55.617095331 +0000 UTC m=+0.900001216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-13ce75130c,}" May 13 23:43:55.622547 kubelet[3011]: I0513 23:43:55.621743 3011 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:43:55.625970 kubelet[3011]: I0513 23:43:55.625951 3011 
volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 23:43:55.626167 kubelet[3011]: I0513 23:43:55.626153 3011 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 23:43:55.626372 kubelet[3011]: I0513 23:43:55.626359 3011 reconciler.go:26] "Reconciler: start to sync state" May 13 23:43:55.626849 kubelet[3011]: W0513 23:43:55.626816 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:43:55.626976 kubelet[3011]: E0513 23:43:55.626959 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:43:55.627131 kubelet[3011]: E0513 23:43:55.627117 3011 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:43:55.627384 kubelet[3011]: I0513 23:43:55.627369 3011 factory.go:221] Registration of the systemd container factory successfully May 13 23:43:55.627530 kubelet[3011]: I0513 23:43:55.627513 3011 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:43:55.627901 kubelet[3011]: E0513 23:43:55.627378 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:43:55.629105 kubelet[3011]: E0513 23:43:55.629079 3011 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-13ce75130c?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="200ms" May 13 23:43:55.629526 kubelet[3011]: I0513 23:43:55.629511 3011 factory.go:221] Registration of the containerd container factory successfully May 13 23:43:55.647120 kubelet[3011]: I0513 23:43:55.646894 3011 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:43:55.647120 kubelet[3011]: I0513 23:43:55.646910 3011 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:43:55.647120 kubelet[3011]: I0513 23:43:55.646928 3011 state_mem.go:36] "Initialized new in-memory state store" May 13 23:43:55.652178 kubelet[3011]: I0513 23:43:55.652102 3011 policy_none.go:49] "None policy: Start" May 13 23:43:55.653028 kubelet[3011]: I0513 23:43:55.652732 3011 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:43:55.653028 kubelet[3011]: I0513 23:43:55.652762 3011 state_mem.go:35] "Initializing new in-memory state store" May 13 23:43:55.666562 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:43:55.679626 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:43:55.682713 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
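The "Failed to ensure lease exists, will retry" entry above is the first of a series: the API server at 10.200.20.40:6443 is not up yet, and the retry interval starts at 200ms and doubles on the later attempts recorded below (400ms, 800ms, 1.6s, 3.2s, 6.4s). A small Go sketch of that doubling-with-a-cap retry loop; the 7s ceiling and the helper name are illustrative choices, not values taken from this log:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the call that keeps failing above with
// "dial tcp 10.200.20.40:6443: connect: connection refused".
func ensureLease() error {
	return errors.New("connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond  // first retry interval reported in the log
	const maxInterval = 7 * time.Second // illustrative ceiling, not taken from the log

	for attempt := 1; attempt <= 6; attempt++ {
		err := ensureLease()
		if err == nil {
			fmt.Println("lease ensured")
			return
		}
		fmt.Printf("attempt %d failed (%v); retrying in %s\n", attempt, err, interval)
		time.Sleep(interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
	fmt.Println("giving up for now; the kubelet keeps retrying instead")
}
```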
May 13 23:43:55.686066 kubelet[3011]: I0513 23:43:55.686039 3011 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:43:55.686572 kubelet[3011]: I0513 23:43:55.686248 3011 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:43:55.686572 kubelet[3011]: I0513 23:43:55.686258 3011 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:43:55.686572 kubelet[3011]: I0513 23:43:55.686499 3011 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:43:55.687966 kubelet[3011]: E0513 23:43:55.687945 3011 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:43:55.692813 kubelet[3011]: I0513 23:43:55.692782 3011 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:43:55.693958 kubelet[3011]: I0513 23:43:55.693940 3011 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:43:55.694046 kubelet[3011]: I0513 23:43:55.694037 3011 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:43:55.694100 kubelet[3011]: I0513 23:43:55.694092 3011 kubelet.go:2321] "Starting kubelet main sync loop" May 13 23:43:55.694282 kubelet[3011]: E0513 23:43:55.694171 3011 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 13 23:43:55.695513 kubelet[3011]: W0513 23:43:55.695472 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:43:55.696015 kubelet[3011]: E0513 23:43:55.695894 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:43:55.787993 kubelet[3011]: I0513 23:43:55.787931 3011 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-13ce75130c" May 13 23:43:55.788396 kubelet[3011]: E0513 23:43:55.788326 3011 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4284.0.0-n-13ce75130c" May 13 23:43:55.803399 systemd[1]: Created slice kubepods-burstable-podf521cd8d04529047a06d14ad6a23f47a.slice - libcontainer container kubepods-burstable-podf521cd8d04529047a06d14ad6a23f47a.slice. May 13 23:43:55.821103 systemd[1]: Created slice kubepods-burstable-pod09580dda23cdcc28e89d9d307dd2356d.slice - libcontainer container kubepods-burstable-pod09580dda23cdcc28e89d9d307dd2356d.slice. 
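The kubepods-burstable-pod*.slice units just created are the cgroup homes for the control-plane static pods: the kubelet logged "Adding static pod path" path="/etc/kubernetes/manifests" a moment earlier, and those manifests are what produce kube-apiserver, kube-controller-manager and kube-scheduler while the API server is still unreachable. A hedged sketch that simply enumerates that directory, assuming only the path from the log:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

// manifestDir is the static pod path the kubelet reported earlier in this log.
const manifestDir = "/etc/kubernetes/manifests"

func main() {
	entries, err := os.ReadDir(manifestDir)
	if err != nil {
		log.Fatalf("cannot read static pod path %s: %v", manifestDir, err)
	}
	for _, e := range entries {
		// Skip subdirectories and dotfiles; the kubelet's file source ignores hidden files too.
		if e.IsDir() || strings.HasPrefix(e.Name(), ".") {
			continue
		}
		fmt.Println(filepath.Join(manifestDir, e.Name()))
	}
}
```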
May 13 23:43:55.827522 kubelet[3011]: I0513 23:43:55.827261 3011 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f521cd8d04529047a06d14ad6a23f47a-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-13ce75130c\" (UID: \"f521cd8d04529047a06d14ad6a23f47a\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-13ce75130c" May 13 23:43:55.827522 kubelet[3011]: I0513 23:43:55.827291 3011 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f521cd8d04529047a06d14ad6a23f47a-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-13ce75130c\" (UID: \"f521cd8d04529047a06d14ad6a23f47a\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-13ce75130c" May 13 23:43:55.827522 kubelet[3011]: I0513 23:43:55.827309 3011 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09580dda23cdcc28e89d9d307dd2356d-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-13ce75130c\" (UID: \"09580dda23cdcc28e89d9d307dd2356d\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-13ce75130c" May 13 23:43:55.827522 kubelet[3011]: I0513 23:43:55.827324 3011 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f521cd8d04529047a06d14ad6a23f47a-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-13ce75130c\" (UID: \"f521cd8d04529047a06d14ad6a23f47a\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-13ce75130c" May 13 23:43:55.827522 kubelet[3011]: I0513 23:43:55.827339 3011 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f521cd8d04529047a06d14ad6a23f47a-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-13ce75130c\" (UID: \"f521cd8d04529047a06d14ad6a23f47a\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-13ce75130c" May 13 23:43:55.827688 kubelet[3011]: I0513 23:43:55.827356 3011 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f521cd8d04529047a06d14ad6a23f47a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-13ce75130c\" (UID: \"f521cd8d04529047a06d14ad6a23f47a\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-13ce75130c" May 13 23:43:55.827688 kubelet[3011]: I0513 23:43:55.827370 3011 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/434f0aeec8f23baadc0dc024d40489c9-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-13ce75130c\" (UID: \"434f0aeec8f23baadc0dc024d40489c9\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-13ce75130c" May 13 23:43:55.827688 kubelet[3011]: I0513 23:43:55.827384 3011 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/434f0aeec8f23baadc0dc024d40489c9-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-13ce75130c\" (UID: \"434f0aeec8f23baadc0dc024d40489c9\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-13ce75130c" May 13 23:43:55.827688 kubelet[3011]: I0513 23:43:55.827399 3011 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/434f0aeec8f23baadc0dc024d40489c9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-13ce75130c\" (UID: \"434f0aeec8f23baadc0dc024d40489c9\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-13ce75130c" May 13 23:43:55.830036 kubelet[3011]: E0513 23:43:55.829995 3011 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-13ce75130c?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="400ms" May 13 23:43:55.834505 systemd[1]: Created slice kubepods-burstable-pod434f0aeec8f23baadc0dc024d40489c9.slice - libcontainer container kubepods-burstable-pod434f0aeec8f23baadc0dc024d40489c9.slice. May 13 23:43:55.990925 kubelet[3011]: I0513 23:43:55.990802 3011 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-13ce75130c" May 13 23:43:55.991305 kubelet[3011]: E0513 23:43:55.991270 3011 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4284.0.0-n-13ce75130c" May 13 23:43:56.118839 containerd[1765]: time="2025-05-13T23:43:56.118732003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-13ce75130c,Uid:f521cd8d04529047a06d14ad6a23f47a,Namespace:kube-system,Attempt:0,}" May 13 23:43:56.132769 containerd[1765]: time="2025-05-13T23:43:56.132581301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-13ce75130c,Uid:09580dda23cdcc28e89d9d307dd2356d,Namespace:kube-system,Attempt:0,}" May 13 23:43:56.136925 containerd[1765]: time="2025-05-13T23:43:56.136777346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-13ce75130c,Uid:434f0aeec8f23baadc0dc024d40489c9,Namespace:kube-system,Attempt:0,}" May 13 23:43:56.230870 kubelet[3011]: E0513 23:43:56.230825 3011 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-13ce75130c?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="800ms" May 13 23:43:56.393678 kubelet[3011]: I0513 23:43:56.393626 3011 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-13ce75130c" May 13 23:43:56.394028 kubelet[3011]: E0513 23:43:56.393985 3011 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4284.0.0-n-13ce75130c" May 13 23:43:56.650201 kubelet[3011]: W0513 23:43:56.650071 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:43:56.650201 kubelet[3011]: E0513 23:43:56.650139 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection 
refused" logger="UnhandledError" May 13 23:43:56.822540 kubelet[3011]: W0513 23:43:56.822500 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:43:56.822875 kubelet[3011]: E0513 23:43:56.822547 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:43:56.832624 kubelet[3011]: W0513 23:43:56.832570 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-13ce75130c&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:43:56.832709 kubelet[3011]: E0513 23:43:56.832631 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-13ce75130c&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:43:57.032142 kubelet[3011]: E0513 23:43:57.032026 3011 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-13ce75130c?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="1.6s" May 13 23:43:57.094615 kubelet[3011]: W0513 23:43:57.094558 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:43:57.094710 kubelet[3011]: E0513 23:43:57.094624 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:43:57.195922 kubelet[3011]: I0513 23:43:57.195893 3011 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-13ce75130c" May 13 23:43:57.196192 kubelet[3011]: E0513 23:43:57.196162 3011 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4284.0.0-n-13ce75130c" May 13 23:43:57.710679 kubelet[3011]: E0513 23:43:57.710641 3011 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:43:58.625527 kubelet[3011]: W0513 23:43:58.625441 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:43:58.625527 kubelet[3011]: E0513 23:43:58.625489 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:43:58.633406 kubelet[3011]: E0513 23:43:58.633366 3011 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-13ce75130c?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="3.2s" May 13 23:43:58.798642 kubelet[3011]: I0513 23:43:58.798602 3011 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-13ce75130c" May 13 23:43:58.799085 kubelet[3011]: E0513 23:43:58.799054 3011 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4284.0.0-n-13ce75130c" May 13 23:43:59.042494 kubelet[3011]: W0513 23:43:59.042428 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:43:59.042494 kubelet[3011]: E0513 23:43:59.042470 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:43:59.427937 kubelet[3011]: W0513 23:43:59.427901 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:44:00.278287 kubelet[3011]: E0513 23:43:59.427944 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:44:00.278287 kubelet[3011]: W0513 23:43:59.808831 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-13ce75130c&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:44:00.278287 kubelet[3011]: E0513 23:43:59.808870 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-13ce75130c&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:44:01.834180 kubelet[3011]: E0513 
23:44:01.834132 3011 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-13ce75130c?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="6.4s" May 13 23:44:04.280749 kubelet[3011]: I0513 23:44:02.001092 3011 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-13ce75130c" May 13 23:44:04.280749 kubelet[3011]: E0513 23:44:02.001406 3011 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4284.0.0-n-13ce75130c" May 13 23:44:04.280749 kubelet[3011]: E0513 23:44:02.088412 3011 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:44:04.280749 kubelet[3011]: W0513 23:44:03.366821 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:44:04.280749 kubelet[3011]: E0513 23:44:03.366862 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:44:04.280749 kubelet[3011]: W0513 23:44:03.580029 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-13ce75130c&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:44:04.281364 kubelet[3011]: E0513 23:44:03.580066 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-13ce75130c&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:44:04.281364 kubelet[3011]: W0513 23:44:04.148652 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:44:04.281364 kubelet[3011]: E0513 23:44:04.148690 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:44:04.427156 kubelet[3011]: W0513 23:44:04.427076 3011 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 13 23:44:04.427156 kubelet[3011]: E0513 23:44:04.427123 3011 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 13 23:44:05.321828 kubelet[3011]: E0513 23:44:05.321723 3011 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-13ce75130c.183f3acdb4aeaea3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-13ce75130c,UID:ci-4284.0.0-n-13ce75130c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-13ce75130c,},FirstTimestamp:2025-05-13 23:43:55.617095331 +0000 UTC m=+0.900001216,LastTimestamp:2025-05-13 23:43:55.617095331 +0000 UTC m=+0.900001216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-13ce75130c,}" May 13 23:44:05.688493 kubelet[3011]: E0513 23:44:05.688455 3011 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:06.535181 containerd[1765]: time="2025-05-13T23:44:06.535141181Z" level=info msg="connecting to shim de38fdd5fd9583167bf087573190f9647b7d16c9e4908ffc285765ed320e1413" address="unix:///run/containerd/s/e896c6cbc649d990a637e6b1f5009e66289525a956c42294ec18789de98e5da6" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:06.560411 systemd[1]: Started cri-containerd-de38fdd5fd9583167bf087573190f9647b7d16c9e4908ffc285765ed320e1413.scope - libcontainer container de38fdd5fd9583167bf087573190f9647b7d16c9e4908ffc285765ed320e1413. May 13 23:44:06.593856 containerd[1765]: time="2025-05-13T23:44:06.593371380Z" level=info msg="connecting to shim 40db9756ad4a0b91b075d6abd1c17b46be78927582ff03ce51f911fdf5204030" address="unix:///run/containerd/s/846ecb6a8b383f0ed1a7d6922a21eb46254d88ed50f8fbe51febc76c9587ea6e" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:06.614364 systemd[1]: Started cri-containerd-40db9756ad4a0b91b075d6abd1c17b46be78927582ff03ce51f911fdf5204030.scope - libcontainer container 40db9756ad4a0b91b075d6abd1c17b46be78927582ff03ce51f911fdf5204030. 
May 13 23:44:06.718939 containerd[1765]: time="2025-05-13T23:44:06.718896790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-13ce75130c,Uid:f521cd8d04529047a06d14ad6a23f47a,Namespace:kube-system,Attempt:0,} returns sandbox id \"de38fdd5fd9583167bf087573190f9647b7d16c9e4908ffc285765ed320e1413\"" May 13 23:44:06.722268 containerd[1765]: time="2025-05-13T23:44:06.721752034Z" level=info msg="CreateContainer within sandbox \"de38fdd5fd9583167bf087573190f9647b7d16c9e4908ffc285765ed320e1413\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:44:06.725133 containerd[1765]: time="2025-05-13T23:44:06.725096518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-13ce75130c,Uid:09580dda23cdcc28e89d9d307dd2356d,Namespace:kube-system,Attempt:0,} returns sandbox id \"40db9756ad4a0b91b075d6abd1c17b46be78927582ff03ce51f911fdf5204030\"" May 13 23:44:06.727203 containerd[1765]: time="2025-05-13T23:44:06.727179081Z" level=info msg="CreateContainer within sandbox \"40db9756ad4a0b91b075d6abd1c17b46be78927582ff03ce51f911fdf5204030\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:44:06.976095 containerd[1765]: time="2025-05-13T23:44:06.976000059Z" level=info msg="connecting to shim 46ce9b7d1d940af07647d012fcbb4c95c17d5099ef965ff5c76e5f2b625fff00" address="unix:///run/containerd/s/6f1e179bb391943cc4b2316f78447a421927cee5c5c5a5b7cb27c841c3ccd8f4" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:06.996384 systemd[1]: Started cri-containerd-46ce9b7d1d940af07647d012fcbb4c95c17d5099ef965ff5c76e5f2b625fff00.scope - libcontainer container 46ce9b7d1d940af07647d012fcbb4c95c17d5099ef965ff5c76e5f2b625fff00. May 13 23:44:07.173251 containerd[1765]: time="2025-05-13T23:44:07.173114687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-13ce75130c,Uid:434f0aeec8f23baadc0dc024d40489c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"46ce9b7d1d940af07647d012fcbb4c95c17d5099ef965ff5c76e5f2b625fff00\"" May 13 23:44:07.225549 containerd[1765]: time="2025-05-13T23:44:07.225505518Z" level=info msg="CreateContainer within sandbox \"46ce9b7d1d940af07647d012fcbb4c95c17d5099ef965ff5c76e5f2b625fff00\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:44:07.271751 containerd[1765]: time="2025-05-13T23:44:07.271487420Z" level=info msg="Container 397bba676cdd874f8557e940578c941090ba121d900f0250113a3a04a1a28a0c: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:07.385907 containerd[1765]: time="2025-05-13T23:44:07.385858855Z" level=info msg="Container 0a5ce22a44ebae9ee7778508bcb45afcca7a04dd649c1fe89c28828200477578: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:07.729205 containerd[1765]: time="2025-05-13T23:44:07.729162841Z" level=info msg="CreateContainer within sandbox \"de38fdd5fd9583167bf087573190f9647b7d16c9e4908ffc285765ed320e1413\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"397bba676cdd874f8557e940578c941090ba121d900f0250113a3a04a1a28a0c\"" May 13 23:44:07.730252 containerd[1765]: time="2025-05-13T23:44:07.729758682Z" level=info msg="StartContainer for \"397bba676cdd874f8557e940578c941090ba121d900f0250113a3a04a1a28a0c\"" May 13 23:44:07.730890 containerd[1765]: time="2025-05-13T23:44:07.730842324Z" level=info msg="connecting to shim 397bba676cdd874f8557e940578c941090ba121d900f0250113a3a04a1a28a0c" 
address="unix:///run/containerd/s/e896c6cbc649d990a637e6b1f5009e66289525a956c42294ec18789de98e5da6" protocol=ttrpc version=3 May 13 23:44:07.734686 containerd[1765]: time="2025-05-13T23:44:07.734633169Z" level=info msg="Container a0664c97383030971b1e3e37c9ea89e486705acec813b2a3e3b2494eb0a6ff15: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:07.751358 systemd[1]: Started cri-containerd-397bba676cdd874f8557e940578c941090ba121d900f0250113a3a04a1a28a0c.scope - libcontainer container 397bba676cdd874f8557e940578c941090ba121d900f0250113a3a04a1a28a0c. May 13 23:44:07.870819 containerd[1765]: time="2025-05-13T23:44:07.870539793Z" level=info msg="CreateContainer within sandbox \"40db9756ad4a0b91b075d6abd1c17b46be78927582ff03ce51f911fdf5204030\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0a5ce22a44ebae9ee7778508bcb45afcca7a04dd649c1fe89c28828200477578\"" May 13 23:44:07.871863 containerd[1765]: time="2025-05-13T23:44:07.871621635Z" level=info msg="StartContainer for \"0a5ce22a44ebae9ee7778508bcb45afcca7a04dd649c1fe89c28828200477578\"" May 13 23:44:07.872302 containerd[1765]: time="2025-05-13T23:44:07.872195475Z" level=info msg="StartContainer for \"397bba676cdd874f8557e940578c941090ba121d900f0250113a3a04a1a28a0c\" returns successfully" May 13 23:44:07.872884 containerd[1765]: time="2025-05-13T23:44:07.872622836Z" level=info msg="connecting to shim 0a5ce22a44ebae9ee7778508bcb45afcca7a04dd649c1fe89c28828200477578" address="unix:///run/containerd/s/846ecb6a8b383f0ed1a7d6922a21eb46254d88ed50f8fbe51febc76c9587ea6e" protocol=ttrpc version=3 May 13 23:44:07.892427 systemd[1]: Started cri-containerd-0a5ce22a44ebae9ee7778508bcb45afcca7a04dd649c1fe89c28828200477578.scope - libcontainer container 0a5ce22a44ebae9ee7778508bcb45afcca7a04dd649c1fe89c28828200477578. May 13 23:44:07.935305 containerd[1765]: time="2025-05-13T23:44:07.935202521Z" level=info msg="StartContainer for \"0a5ce22a44ebae9ee7778508bcb45afcca7a04dd649c1fe89c28828200477578\" returns successfully" May 13 23:44:07.976641 containerd[1765]: time="2025-05-13T23:44:07.976396697Z" level=info msg="CreateContainer within sandbox \"46ce9b7d1d940af07647d012fcbb4c95c17d5099ef965ff5c76e5f2b625fff00\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0664c97383030971b1e3e37c9ea89e486705acec813b2a3e3b2494eb0a6ff15\"" May 13 23:44:07.981289 containerd[1765]: time="2025-05-13T23:44:07.977756419Z" level=info msg="StartContainer for \"a0664c97383030971b1e3e37c9ea89e486705acec813b2a3e3b2494eb0a6ff15\"" May 13 23:44:07.984674 containerd[1765]: time="2025-05-13T23:44:07.984646948Z" level=info msg="connecting to shim a0664c97383030971b1e3e37c9ea89e486705acec813b2a3e3b2494eb0a6ff15" address="unix:///run/containerd/s/6f1e179bb391943cc4b2316f78447a421927cee5c5c5a5b7cb27c841c3ccd8f4" protocol=ttrpc version=3 May 13 23:44:08.014368 systemd[1]: Started cri-containerd-a0664c97383030971b1e3e37c9ea89e486705acec813b2a3e3b2494eb0a6ff15.scope - libcontainer container a0664c97383030971b1e3e37c9ea89e486705acec813b2a3e3b2494eb0a6ff15. 
May 13 23:44:08.089678 containerd[1765]: time="2025-05-13T23:44:08.089644691Z" level=info msg="StartContainer for \"a0664c97383030971b1e3e37c9ea89e486705acec813b2a3e3b2494eb0a6ff15\" returns successfully" May 13 23:44:08.403908 kubelet[3011]: I0513 23:44:08.403592 3011 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-13ce75130c" May 13 23:44:09.918154 kubelet[3011]: E0513 23:44:09.918112 3011 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-n-13ce75130c\" not found" node="ci-4284.0.0-n-13ce75130c" May 13 23:44:10.010671 kubelet[3011]: I0513 23:44:10.010200 3011 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284.0.0-n-13ce75130c" May 13 23:44:10.010671 kubelet[3011]: E0513 23:44:10.010243 3011 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4284.0.0-n-13ce75130c\": node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:10.026812 kubelet[3011]: E0513 23:44:10.026766 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:10.127312 kubelet[3011]: E0513 23:44:10.127270 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:10.228185 kubelet[3011]: E0513 23:44:10.228066 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:10.328436 kubelet[3011]: E0513 23:44:10.328399 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:10.428907 kubelet[3011]: E0513 23:44:10.428867 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:10.529814 kubelet[3011]: E0513 23:44:10.529699 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:10.631236 kubelet[3011]: E0513 23:44:10.631183 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:10.732451 kubelet[3011]: E0513 23:44:10.732312 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:10.832634 kubelet[3011]: E0513 23:44:10.832400 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:10.933016 kubelet[3011]: E0513 23:44:10.932980 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:11.033605 kubelet[3011]: E0513 23:44:11.033566 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:11.134787 kubelet[3011]: E0513 23:44:11.134743 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:11.235520 kubelet[3011]: E0513 23:44:11.235437 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:11.336286 kubelet[3011]: E0513 23:44:11.336248 3011 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:11.437139 kubelet[3011]: E0513 23:44:11.437034 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:11.537603 kubelet[3011]: E0513 23:44:11.537562 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:11.637842 kubelet[3011]: E0513 23:44:11.637804 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:11.738075 kubelet[3011]: E0513 23:44:11.737973 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:11.838851 kubelet[3011]: E0513 23:44:11.838790 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:11.939442 kubelet[3011]: E0513 23:44:11.939366 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:12.040009 kubelet[3011]: E0513 23:44:12.039901 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:12.141365 kubelet[3011]: E0513 23:44:12.141323 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:12.163856 systemd[1]: Reload requested from client PID 3275 ('systemctl') (unit session-9.scope)... May 13 23:44:12.163872 systemd[1]: Reloading... May 13 23:44:12.241623 kubelet[3011]: E0513 23:44:12.241575 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:12.258329 zram_generator::config[3325]: No configuration found. May 13 23:44:12.341962 kubelet[3011]: E0513 23:44:12.341692 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:12.358537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:44:12.442419 kubelet[3011]: E0513 23:44:12.442360 3011 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:12.471954 systemd[1]: Reloading finished in 307 ms. May 13 23:44:12.497759 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:44:12.511298 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:44:12.511526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:44:12.511601 systemd[1]: kubelet.service: Consumed 1.210s CPU time, 114.8M memory peak. May 13 23:44:12.513748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:44:12.691703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:44:12.700499 (kubelet)[3386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:44:12.748234 kubelet[3386]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:44:12.748234 kubelet[3386]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:44:12.748234 kubelet[3386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:44:12.748234 kubelet[3386]: I0513 23:44:12.748204 3386 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:44:12.756528 kubelet[3386]: I0513 23:44:12.756493 3386 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 23:44:12.756528 kubelet[3386]: I0513 23:44:12.756521 3386 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:44:12.756773 kubelet[3386]: I0513 23:44:12.756753 3386 server.go:929] "Client rotation is on, will bootstrap in background" May 13 23:44:12.761257 kubelet[3386]: I0513 23:44:12.759792 3386 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 23:44:12.764282 kubelet[3386]: I0513 23:44:12.762153 3386 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:44:12.767703 kubelet[3386]: I0513 23:44:12.767687 3386 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:44:12.771196 kubelet[3386]: I0513 23:44:12.771173 3386 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:44:12.771321 kubelet[3386]: I0513 23:44:12.771305 3386 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 23:44:12.771431 kubelet[3386]: I0513 23:44:12.771405 3386 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:44:12.771605 kubelet[3386]: I0513 23:44:12.771432 3386 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-13ce75130c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:44:12.771685 kubelet[3386]: I0513 23:44:12.771611 3386 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:44:12.771685 kubelet[3386]: I0513 23:44:12.771622 3386 container_manager_linux.go:300] "Creating device plugin manager" May 13 23:44:12.771685 kubelet[3386]: I0513 23:44:12.771651 3386 state_mem.go:36] "Initialized new in-memory state store" May 13 23:44:12.771761 kubelet[3386]: I0513 23:44:12.771755 3386 kubelet.go:408] "Attempting to sync node with API server" May 13 23:44:12.771784 kubelet[3386]: I0513 23:44:12.771766 3386 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:44:12.771808 kubelet[3386]: I0513 23:44:12.771799 3386 kubelet.go:314] "Adding apiserver pod source" May 13 23:44:12.772089 kubelet[3386]: I0513 23:44:12.771807 3386 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:44:12.776591 kubelet[3386]: I0513 23:44:12.776572 3386 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:44:12.777383 kubelet[3386]: I0513 23:44:12.777364 3386 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:44:12.778771 kubelet[3386]: I0513 23:44:12.778576 3386 server.go:1269] "Started kubelet" May 13 23:44:12.781558 kubelet[3386]: I0513 23:44:12.781537 3386 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:44:12.787854 kubelet[3386]: 
I0513 23:44:12.786400 3386 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:44:12.787854 kubelet[3386]: I0513 23:44:12.787457 3386 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 23:44:12.787854 kubelet[3386]: E0513 23:44:12.787623 3386 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-13ce75130c\" not found" May 13 23:44:12.788329 kubelet[3386]: I0513 23:44:12.788292 3386 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 23:44:12.788704 kubelet[3386]: I0513 23:44:12.788525 3386 reconciler.go:26] "Reconciler: start to sync state" May 13 23:44:12.793008 kubelet[3386]: I0513 23:44:12.792898 3386 factory.go:221] Registration of the systemd container factory successfully May 13 23:44:12.793061 kubelet[3386]: I0513 23:44:12.793041 3386 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:44:12.803332 kubelet[3386]: I0513 23:44:12.803303 3386 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:44:12.804309 kubelet[3386]: I0513 23:44:12.804289 3386 server.go:460] "Adding debug handlers to kubelet server" May 13 23:44:12.805177 kubelet[3386]: I0513 23:44:12.805132 3386 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:44:12.805475 kubelet[3386]: I0513 23:44:12.805460 3386 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:44:12.814300 kubelet[3386]: I0513 23:44:12.814272 3386 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:44:12.815297 kubelet[3386]: I0513 23:44:12.815280 3386 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:44:12.815395 kubelet[3386]: I0513 23:44:12.815384 3386 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:44:12.815453 kubelet[3386]: I0513 23:44:12.815446 3386 kubelet.go:2321] "Starting kubelet main sync loop" May 13 23:44:12.815548 kubelet[3386]: E0513 23:44:12.815528 3386 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:44:12.828407 kubelet[3386]: I0513 23:44:12.828385 3386 factory.go:221] Registration of the containerd container factory successfully May 13 23:44:12.837435 kubelet[3386]: E0513 23:44:12.837396 3386 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:44:12.880564 kubelet[3386]: I0513 23:44:12.880540 3386 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:44:12.880708 kubelet[3386]: I0513 23:44:12.880696 3386 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:44:12.880765 kubelet[3386]: I0513 23:44:12.880758 3386 state_mem.go:36] "Initialized new in-memory state store" May 13 23:44:12.881008 kubelet[3386]: I0513 23:44:12.880980 3386 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:44:12.881082 kubelet[3386]: I0513 23:44:12.881059 3386 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:44:12.881127 kubelet[3386]: I0513 23:44:12.881119 3386 policy_none.go:49] "None policy: Start" May 13 23:44:12.881767 kubelet[3386]: I0513 23:44:12.881746 3386 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:44:12.881832 kubelet[3386]: I0513 23:44:12.881771 3386 state_mem.go:35] "Initializing new in-memory state store" May 13 23:44:12.881954 kubelet[3386]: I0513 23:44:12.881932 3386 state_mem.go:75] "Updated machine memory state" May 13 23:44:12.885902 kubelet[3386]: I0513 23:44:12.885876 3386 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:44:12.886072 kubelet[3386]: I0513 23:44:12.886036 3386 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:44:12.886110 kubelet[3386]: I0513 23:44:12.886057 3386 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:44:12.886893 kubelet[3386]: I0513 23:44:12.886585 3386 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:44:12.925787 kubelet[3386]: W0513 23:44:12.925753 3386 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:44:12.928849 kubelet[3386]: W0513 23:44:12.928745 3386 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:44:12.928849 kubelet[3386]: W0513 23:44:12.928786 3386 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:44:12.990058 kubelet[3386]: I0513 23:44:12.989948 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/434f0aeec8f23baadc0dc024d40489c9-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-13ce75130c\" (UID: \"434f0aeec8f23baadc0dc024d40489c9\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-13ce75130c" May 13 23:44:12.992665 kubelet[3386]: I0513 23:44:12.992419 3386 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-13ce75130c" May 13 23:44:13.090457 kubelet[3386]: I0513 23:44:13.090412 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/434f0aeec8f23baadc0dc024d40489c9-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-13ce75130c\" (UID: \"434f0aeec8f23baadc0dc024d40489c9\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-13ce75130c" May 13 23:44:13.090457 kubelet[3386]: I0513 23:44:13.090461 3386 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/434f0aeec8f23baadc0dc024d40489c9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-13ce75130c\" (UID: \"434f0aeec8f23baadc0dc024d40489c9\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-13ce75130c" May 13 23:44:13.090640 kubelet[3386]: I0513 23:44:13.090487 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f521cd8d04529047a06d14ad6a23f47a-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-13ce75130c\" (UID: \"f521cd8d04529047a06d14ad6a23f47a\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-13ce75130c" May 13 23:44:13.090640 kubelet[3386]: I0513 23:44:13.090508 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f521cd8d04529047a06d14ad6a23f47a-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-13ce75130c\" (UID: \"f521cd8d04529047a06d14ad6a23f47a\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-13ce75130c" May 13 23:44:13.090640 kubelet[3386]: I0513 23:44:13.090528 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f521cd8d04529047a06d14ad6a23f47a-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-13ce75130c\" (UID: \"f521cd8d04529047a06d14ad6a23f47a\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-13ce75130c" May 13 23:44:13.090711 kubelet[3386]: I0513 23:44:13.090545 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f521cd8d04529047a06d14ad6a23f47a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-13ce75130c\" (UID: \"f521cd8d04529047a06d14ad6a23f47a\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-13ce75130c" May 13 23:44:13.090817 kubelet[3386]: I0513 23:44:13.090769 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f521cd8d04529047a06d14ad6a23f47a-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-13ce75130c\" (UID: \"f521cd8d04529047a06d14ad6a23f47a\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-13ce75130c" May 13 23:44:13.090939 kubelet[3386]: I0513 23:44:13.090844 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09580dda23cdcc28e89d9d307dd2356d-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-13ce75130c\" (UID: \"09580dda23cdcc28e89d9d307dd2356d\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-13ce75130c" May 13 23:44:13.220487 kubelet[3386]: I0513 23:44:13.220436 3386 kubelet_node_status.go:111] "Node was previously registered" node="ci-4284.0.0-n-13ce75130c" May 13 23:44:14.275126 kubelet[3386]: I0513 23:44:13.772320 3386 apiserver.go:52] "Watching apiserver" May 13 23:44:14.275126 kubelet[3386]: I0513 23:44:13.789416 3386 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 23:44:14.276623 kubelet[3386]: I0513 23:44:14.275520 3386 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284.0.0-n-13ce75130c" May 13 23:44:14.285330 
kubelet[3386]: W0513 23:44:14.285308 3386 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:44:14.285481 kubelet[3386]: E0513 23:44:14.285466 3386 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4284.0.0-n-13ce75130c\" already exists" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-13ce75130c" May 13 23:44:14.302040 kubelet[3386]: I0513 23:44:14.301990 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-n-13ce75130c" podStartSLOduration=2.301977794 podStartE2EDuration="2.301977794s" podCreationTimestamp="2025-05-13 23:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:44:14.300710632 +0000 UTC m=+1.597647281" watchObservedRunningTime="2025-05-13 23:44:14.301977794 +0000 UTC m=+1.598914403" May 13 23:44:14.312052 kubelet[3386]: I0513 23:44:14.311726 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-n-13ce75130c" podStartSLOduration=2.311709487 podStartE2EDuration="2.311709487s" podCreationTimestamp="2025-05-13 23:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:44:14.311584807 +0000 UTC m=+1.608521416" watchObservedRunningTime="2025-05-13 23:44:14.311709487 +0000 UTC m=+1.608646096" May 13 23:44:14.333513 kubelet[3386]: I0513 23:44:14.333437 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-13ce75130c" podStartSLOduration=2.333419555 podStartE2EDuration="2.333419555s" podCreationTimestamp="2025-05-13 23:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:44:14.332247394 +0000 UTC m=+1.629184043" watchObservedRunningTime="2025-05-13 23:44:14.333419555 +0000 UTC m=+1.630356164" May 13 23:44:16.359525 kubelet[3386]: I0513 23:44:16.359404 3386 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:44:16.362242 containerd[1765]: time="2025-05-13T23:44:16.360276125Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:44:16.362506 kubelet[3386]: I0513 23:44:16.360542 3386 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:44:17.082627 systemd[1]: Created slice kubepods-besteffort-podfcfa5ca6_4286_4ce8_a4f4_c38238424562.slice - libcontainer container kubepods-besteffort-podfcfa5ca6_4286_4ce8_a4f4_c38238424562.slice. 
May 13 23:44:17.114337 kubelet[3386]: I0513 23:44:17.114168 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcfa5ca6-4286-4ce8-a4f4-c38238424562-lib-modules\") pod \"kube-proxy-pvdbb\" (UID: \"fcfa5ca6-4286-4ce8-a4f4-c38238424562\") " pod="kube-system/kube-proxy-pvdbb" May 13 23:44:17.114337 kubelet[3386]: I0513 23:44:17.114234 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgk4k\" (UniqueName: \"kubernetes.io/projected/fcfa5ca6-4286-4ce8-a4f4-c38238424562-kube-api-access-jgk4k\") pod \"kube-proxy-pvdbb\" (UID: \"fcfa5ca6-4286-4ce8-a4f4-c38238424562\") " pod="kube-system/kube-proxy-pvdbb" May 13 23:44:17.114337 kubelet[3386]: I0513 23:44:17.114257 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fcfa5ca6-4286-4ce8-a4f4-c38238424562-xtables-lock\") pod \"kube-proxy-pvdbb\" (UID: \"fcfa5ca6-4286-4ce8-a4f4-c38238424562\") " pod="kube-system/kube-proxy-pvdbb" May 13 23:44:17.114337 kubelet[3386]: I0513 23:44:17.114278 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fcfa5ca6-4286-4ce8-a4f4-c38238424562-kube-proxy\") pod \"kube-proxy-pvdbb\" (UID: \"fcfa5ca6-4286-4ce8-a4f4-c38238424562\") " pod="kube-system/kube-proxy-pvdbb" May 13 23:44:17.310733 systemd[1]: Created slice kubepods-besteffort-pod89c3b268_2b7f_4aed_914e_a72ec810d6f4.slice - libcontainer container kubepods-besteffort-pod89c3b268_2b7f_4aed_914e_a72ec810d6f4.slice. May 13 23:44:17.316307 kubelet[3386]: I0513 23:44:17.316003 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/89c3b268-2b7f-4aed-914e-a72ec810d6f4-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-klgw5\" (UID: \"89c3b268-2b7f-4aed-914e-a72ec810d6f4\") " pod="tigera-operator/tigera-operator-6f6897fdc5-klgw5" May 13 23:44:17.316463 kubelet[3386]: I0513 23:44:17.316318 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqlx6\" (UniqueName: \"kubernetes.io/projected/89c3b268-2b7f-4aed-914e-a72ec810d6f4-kube-api-access-jqlx6\") pod \"tigera-operator-6f6897fdc5-klgw5\" (UID: \"89c3b268-2b7f-4aed-914e-a72ec810d6f4\") " pod="tigera-operator/tigera-operator-6f6897fdc5-klgw5" May 13 23:44:17.316650 sudo[2214]: pam_unix(sudo:session): session closed for user root May 13 23:44:17.391200 containerd[1765]: time="2025-05-13T23:44:17.391159912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvdbb,Uid:fcfa5ca6-4286-4ce8-a4f4-c38238424562,Namespace:kube-system,Attempt:0,}" May 13 23:44:17.405655 sshd[2213]: Connection closed by 10.200.16.10 port 44510 May 13 23:44:17.406126 sshd-session[2211]: pam_unix(sshd:session): session closed for user core May 13 23:44:17.409330 systemd[1]: sshd@6-10.200.20.40:22-10.200.16.10:44510.service: Deactivated successfully. May 13 23:44:17.411724 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:44:17.411990 systemd[1]: session-9.scope: Consumed 6.046s CPU time, 225.7M memory peak. May 13 23:44:17.413755 systemd-logind[1726]: Session 9 logged out. Waiting for processes to exit. May 13 23:44:17.414638 systemd-logind[1726]: Removed session 9. 
May 13 23:44:17.616726 containerd[1765]: time="2025-05-13T23:44:17.616678287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-klgw5,Uid:89c3b268-2b7f-4aed-914e-a72ec810d6f4,Namespace:tigera-operator,Attempt:0,}" May 13 23:44:17.939647 containerd[1765]: time="2025-05-13T23:44:17.939516749Z" level=info msg="connecting to shim 6ae1e29d93d46bc2de7b0d75e77bca7dd576d687ec2d9e90a4dacfbabd522552" address="unix:///run/containerd/s/5cc56c8d0c12337f1e69a9fc5ba384deb767242e14e816f6ac5f19e91bae9c99" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:17.963381 systemd[1]: Started cri-containerd-6ae1e29d93d46bc2de7b0d75e77bca7dd576d687ec2d9e90a4dacfbabd522552.scope - libcontainer container 6ae1e29d93d46bc2de7b0d75e77bca7dd576d687ec2d9e90a4dacfbabd522552. May 13 23:44:18.030536 containerd[1765]: time="2025-05-13T23:44:18.030427428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvdbb,Uid:fcfa5ca6-4286-4ce8-a4f4-c38238424562,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ae1e29d93d46bc2de7b0d75e77bca7dd576d687ec2d9e90a4dacfbabd522552\"" May 13 23:44:18.035345 containerd[1765]: time="2025-05-13T23:44:18.033761392Z" level=info msg="CreateContainer within sandbox \"6ae1e29d93d46bc2de7b0d75e77bca7dd576d687ec2d9e90a4dacfbabd522552\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:44:18.279087 containerd[1765]: time="2025-05-13T23:44:18.278960913Z" level=info msg="connecting to shim b1a97643b76886a7f4814315cac5d4702776ef88bef3347086b9f0fb93800e82" address="unix:///run/containerd/s/3a242ad19211a66ea649a3d4df3ea76c2f5bd958d180bbecb9e15b0bb5c683b3" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:18.307387 systemd[1]: Started cri-containerd-b1a97643b76886a7f4814315cac5d4702776ef88bef3347086b9f0fb93800e82.scope - libcontainer container b1a97643b76886a7f4814315cac5d4702776ef88bef3347086b9f0fb93800e82. 
May 13 23:44:18.435355 containerd[1765]: time="2025-05-13T23:44:18.433931875Z" level=info msg="Container 292fa030ccba8c84e2b8083ff6f898c6a13f065a3b9c62a35f92d860f8def8ee: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:18.478715 containerd[1765]: time="2025-05-13T23:44:18.478607534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-klgw5,Uid:89c3b268-2b7f-4aed-914e-a72ec810d6f4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b1a97643b76886a7f4814315cac5d4702776ef88bef3347086b9f0fb93800e82\"" May 13 23:44:18.481647 containerd[1765]: time="2025-05-13T23:44:18.481606738Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 23:44:18.628649 containerd[1765]: time="2025-05-13T23:44:18.628477650Z" level=info msg="CreateContainer within sandbox \"6ae1e29d93d46bc2de7b0d75e77bca7dd576d687ec2d9e90a4dacfbabd522552\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"292fa030ccba8c84e2b8083ff6f898c6a13f065a3b9c62a35f92d860f8def8ee\"" May 13 23:44:18.630748 containerd[1765]: time="2025-05-13T23:44:18.629423371Z" level=info msg="StartContainer for \"292fa030ccba8c84e2b8083ff6f898c6a13f065a3b9c62a35f92d860f8def8ee\"" May 13 23:44:18.631054 containerd[1765]: time="2025-05-13T23:44:18.630832653Z" level=info msg="connecting to shim 292fa030ccba8c84e2b8083ff6f898c6a13f065a3b9c62a35f92d860f8def8ee" address="unix:///run/containerd/s/5cc56c8d0c12337f1e69a9fc5ba384deb767242e14e816f6ac5f19e91bae9c99" protocol=ttrpc version=3 May 13 23:44:18.655357 systemd[1]: Started cri-containerd-292fa030ccba8c84e2b8083ff6f898c6a13f065a3b9c62a35f92d860f8def8ee.scope - libcontainer container 292fa030ccba8c84e2b8083ff6f898c6a13f065a3b9c62a35f92d860f8def8ee. May 13 23:44:18.724283 containerd[1765]: time="2025-05-13T23:44:18.724234095Z" level=info msg="StartContainer for \"292fa030ccba8c84e2b8083ff6f898c6a13f065a3b9c62a35f92d860f8def8ee\" returns successfully" May 13 23:44:21.504539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1408810441.mount: Deactivated successfully. 
May 13 23:44:22.481251 containerd[1765]: time="2025-05-13T23:44:22.481064447Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:22.528900 containerd[1765]: time="2025-05-13T23:44:22.528836830Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 13 23:44:22.534000 containerd[1765]: time="2025-05-13T23:44:22.533941717Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:22.578252 containerd[1765]: time="2025-05-13T23:44:22.577983735Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:22.578931 containerd[1765]: time="2025-05-13T23:44:22.578896056Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 4.097243798s" May 13 23:44:22.578931 containerd[1765]: time="2025-05-13T23:44:22.578927776Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 13 23:44:22.582117 containerd[1765]: time="2025-05-13T23:44:22.581565019Z" level=info msg="CreateContainer within sandbox \"b1a97643b76886a7f4814315cac5d4702776ef88bef3347086b9f0fb93800e82\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 23:44:22.718187 containerd[1765]: time="2025-05-13T23:44:22.718137558Z" level=info msg="Container 80cedf21318dbd2e520a6c351cbd55cfe8833a91f8c4a24b9e86377a574143a3: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:22.831587 containerd[1765]: time="2025-05-13T23:44:22.831469427Z" level=info msg="CreateContainer within sandbox \"b1a97643b76886a7f4814315cac5d4702776ef88bef3347086b9f0fb93800e82\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"80cedf21318dbd2e520a6c351cbd55cfe8833a91f8c4a24b9e86377a574143a3\"" May 13 23:44:22.832568 containerd[1765]: time="2025-05-13T23:44:22.832375748Z" level=info msg="StartContainer for \"80cedf21318dbd2e520a6c351cbd55cfe8833a91f8c4a24b9e86377a574143a3\"" May 13 23:44:22.833600 containerd[1765]: time="2025-05-13T23:44:22.833139709Z" level=info msg="connecting to shim 80cedf21318dbd2e520a6c351cbd55cfe8833a91f8c4a24b9e86377a574143a3" address="unix:///run/containerd/s/3a242ad19211a66ea649a3d4df3ea76c2f5bd958d180bbecb9e15b0bb5c683b3" protocol=ttrpc version=3 May 13 23:44:22.852371 systemd[1]: Started cri-containerd-80cedf21318dbd2e520a6c351cbd55cfe8833a91f8c4a24b9e86377a574143a3.scope - libcontainer container 80cedf21318dbd2e520a6c351cbd55cfe8833a91f8c4a24b9e86377a574143a3. 
May 13 23:44:22.885360 containerd[1765]: time="2025-05-13T23:44:22.885328178Z" level=info msg="StartContainer for \"80cedf21318dbd2e520a6c351cbd55cfe8833a91f8c4a24b9e86377a574143a3\" returns successfully" May 13 23:44:23.490378 kubelet[3386]: I0513 23:44:23.490101 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pvdbb" podStartSLOduration=6.490085091 podStartE2EDuration="6.490085091s" podCreationTimestamp="2025-05-13 23:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:44:18.885694066 +0000 UTC m=+6.182630675" watchObservedRunningTime="2025-05-13 23:44:23.490085091 +0000 UTC m=+10.787021700" May 13 23:44:27.324301 kubelet[3386]: I0513 23:44:27.324233 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-klgw5" podStartSLOduration=6.224205678 podStartE2EDuration="10.324205s" podCreationTimestamp="2025-05-13 23:44:17 +0000 UTC" firstStartedPulling="2025-05-13 23:44:18.479780335 +0000 UTC m=+5.776716944" lastFinishedPulling="2025-05-13 23:44:22.579779657 +0000 UTC m=+9.876716266" observedRunningTime="2025-05-13 23:44:23.898842427 +0000 UTC m=+11.195779076" watchObservedRunningTime="2025-05-13 23:44:27.324205 +0000 UTC m=+14.621141609" May 13 23:44:27.333877 kubelet[3386]: W0513 23:44:27.332296 3386 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4284.0.0-n-13ce75130c" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4284.0.0-n-13ce75130c' and this object May 13 23:44:27.333877 kubelet[3386]: E0513 23:44:27.332347 3386 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4284.0.0-n-13ce75130c\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4284.0.0-n-13ce75130c' and this object" logger="UnhandledError" May 13 23:44:27.333877 kubelet[3386]: W0513 23:44:27.332399 3386 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4284.0.0-n-13ce75130c" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4284.0.0-n-13ce75130c' and this object May 13 23:44:27.333877 kubelet[3386]: E0513 23:44:27.332409 3386 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4284.0.0-n-13ce75130c\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4284.0.0-n-13ce75130c' and this object" logger="UnhandledError" May 13 23:44:27.333877 kubelet[3386]: W0513 23:44:27.331894 3386 reflector.go:561] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4284.0.0-n-13ce75130c" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4284.0.0-n-13ce75130c' and this object May 
13 23:44:27.334089 kubelet[3386]: E0513 23:44:27.332550 3386 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4284.0.0-n-13ce75130c\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4284.0.0-n-13ce75130c' and this object" logger="UnhandledError" May 13 23:44:27.336766 systemd[1]: Created slice kubepods-besteffort-pode0ad5a15_3ec0_4420_8855_56d5eb85ddf6.slice - libcontainer container kubepods-besteffort-pode0ad5a15_3ec0_4420_8855_56d5eb85ddf6.slice. May 13 23:44:27.366506 kubelet[3386]: I0513 23:44:27.366419 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgtl8\" (UniqueName: \"kubernetes.io/projected/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-kube-api-access-pgtl8\") pod \"calico-typha-67c7dd6748-96s67\" (UID: \"e0ad5a15-3ec0-4420-8855-56d5eb85ddf6\") " pod="calico-system/calico-typha-67c7dd6748-96s67" May 13 23:44:27.366637 kubelet[3386]: I0513 23:44:27.366610 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-tigera-ca-bundle\") pod \"calico-typha-67c7dd6748-96s67\" (UID: \"e0ad5a15-3ec0-4420-8855-56d5eb85ddf6\") " pod="calico-system/calico-typha-67c7dd6748-96s67" May 13 23:44:27.366666 kubelet[3386]: I0513 23:44:27.366635 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-typha-certs\") pod \"calico-typha-67c7dd6748-96s67\" (UID: \"e0ad5a15-3ec0-4420-8855-56d5eb85ddf6\") " pod="calico-system/calico-typha-67c7dd6748-96s67" May 13 23:44:27.823961 systemd[1]: Created slice kubepods-besteffort-pod56848f67_cdcb_434e_b946_495c63c2981d.slice - libcontainer container kubepods-besteffort-pod56848f67_cdcb_434e_b946_495c63c2981d.slice. 
May 13 23:44:27.872303 kubelet[3386]: I0513 23:44:27.871913 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-lib-modules\") pod \"calico-node-4pt9x\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " pod="calico-system/calico-node-4pt9x" May 13 23:44:27.872303 kubelet[3386]: I0513 23:44:27.871958 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-policysync\") pod \"calico-node-4pt9x\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " pod="calico-system/calico-node-4pt9x" May 13 23:44:27.872303 kubelet[3386]: I0513 23:44:27.871978 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/56848f67-cdcb-434e-b946-495c63c2981d-node-certs\") pod \"calico-node-4pt9x\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " pod="calico-system/calico-node-4pt9x" May 13 23:44:27.872303 kubelet[3386]: I0513 23:44:27.871995 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-var-run-calico\") pod \"calico-node-4pt9x\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " pod="calico-system/calico-node-4pt9x" May 13 23:44:27.872303 kubelet[3386]: I0513 23:44:27.872013 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-cni-log-dir\") pod \"calico-node-4pt9x\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " pod="calico-system/calico-node-4pt9x" May 13 23:44:27.872654 kubelet[3386]: I0513 23:44:27.872029 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6fdt\" (UniqueName: \"kubernetes.io/projected/56848f67-cdcb-434e-b946-495c63c2981d-kube-api-access-p6fdt\") pod \"calico-node-4pt9x\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " pod="calico-system/calico-node-4pt9x" May 13 23:44:27.872654 kubelet[3386]: I0513 23:44:27.872062 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56848f67-cdcb-434e-b946-495c63c2981d-tigera-ca-bundle\") pod \"calico-node-4pt9x\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " pod="calico-system/calico-node-4pt9x" May 13 23:44:27.872654 kubelet[3386]: I0513 23:44:27.872080 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-var-lib-calico\") pod \"calico-node-4pt9x\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " pod="calico-system/calico-node-4pt9x" May 13 23:44:27.872654 kubelet[3386]: I0513 23:44:27.872096 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-cni-bin-dir\") pod \"calico-node-4pt9x\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " pod="calico-system/calico-node-4pt9x" May 13 23:44:27.872654 kubelet[3386]: I0513 23:44:27.872119 3386 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-cni-net-dir\") pod \"calico-node-4pt9x\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " pod="calico-system/calico-node-4pt9x" May 13 23:44:27.872762 kubelet[3386]: I0513 23:44:27.872136 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-flexvol-driver-host\") pod \"calico-node-4pt9x\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " pod="calico-system/calico-node-4pt9x" May 13 23:44:27.872762 kubelet[3386]: I0513 23:44:27.872161 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-xtables-lock\") pod \"calico-node-4pt9x\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " pod="calico-system/calico-node-4pt9x" May 13 23:44:27.975826 kubelet[3386]: E0513 23:44:27.975800 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:27.976068 kubelet[3386]: W0513 23:44:27.975890 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:27.976068 kubelet[3386]: E0513 23:44:27.975912 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:27.976628 kubelet[3386]: E0513 23:44:27.976612 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:27.976870 kubelet[3386]: W0513 23:44:27.976706 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:27.976870 kubelet[3386]: E0513 23:44:27.976723 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:27.977251 kubelet[3386]: E0513 23:44:27.977149 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:27.977251 kubelet[3386]: W0513 23:44:27.977172 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:27.977251 kubelet[3386]: E0513 23:44:27.977185 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:27.977908 kubelet[3386]: E0513 23:44:27.977534 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:27.977908 kubelet[3386]: W0513 23:44:27.977643 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:27.977908 kubelet[3386]: E0513 23:44:27.977658 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:27.978146 kubelet[3386]: E0513 23:44:27.978133 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:27.978206 kubelet[3386]: W0513 23:44:27.978194 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:27.978290 kubelet[3386]: E0513 23:44:27.978279 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:27.982001 kubelet[3386]: E0513 23:44:27.978469 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:27.982001 kubelet[3386]: W0513 23:44:27.980769 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:27.982001 kubelet[3386]: E0513 23:44:27.980808 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:27.984632 kubelet[3386]: E0513 23:44:27.984516 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:27.984632 kubelet[3386]: W0513 23:44:27.984625 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:27.984820 kubelet[3386]: E0513 23:44:27.984792 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:27.985251 kubelet[3386]: E0513 23:44:27.985230 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:27.985251 kubelet[3386]: W0513 23:44:27.985246 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:27.985454 kubelet[3386]: E0513 23:44:27.985426 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:27.985536 kubelet[3386]: E0513 23:44:27.985515 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:27.985536 kubelet[3386]: W0513 23:44:27.985531 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:27.985536 kubelet[3386]: E0513 23:44:27.985554 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:27.985894 kubelet[3386]: E0513 23:44:27.985873 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:27.985894 kubelet[3386]: W0513 23:44:27.985888 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:27.985964 kubelet[3386]: E0513 23:44:27.985905 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:27.986263 kubelet[3386]: E0513 23:44:27.986245 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:27.986263 kubelet[3386]: W0513 23:44:27.986261 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:27.986316 kubelet[3386]: E0513 23:44:27.986271 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:27.986550 kubelet[3386]: E0513 23:44:27.986532 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:27.986550 kubelet[3386]: W0513 23:44:27.986546 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:27.986614 kubelet[3386]: E0513 23:44:27.986557 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:27.987019 kubelet[3386]: E0513 23:44:27.987001 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:27.987019 kubelet[3386]: W0513 23:44:27.987016 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:27.987076 kubelet[3386]: E0513 23:44:27.987026 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.061981 kubelet[3386]: E0513 23:44:28.061300 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqzwk" podUID="74844576-5d15-4959-aa52-fdc4c5736ba8" May 13 23:44:28.070367 kubelet[3386]: E0513 23:44:28.070338 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.070367 kubelet[3386]: W0513 23:44:28.070361 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.070529 kubelet[3386]: E0513 23:44:28.070380 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.070529 kubelet[3386]: E0513 23:44:28.070523 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.070573 kubelet[3386]: W0513 23:44:28.070531 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.070573 kubelet[3386]: E0513 23:44:28.070541 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.070672 kubelet[3386]: E0513 23:44:28.070658 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.070672 kubelet[3386]: W0513 23:44:28.070670 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.070759 kubelet[3386]: E0513 23:44:28.070678 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.070806 kubelet[3386]: E0513 23:44:28.070798 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.070806 kubelet[3386]: W0513 23:44:28.070805 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.070884 kubelet[3386]: E0513 23:44:28.070813 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.070963 kubelet[3386]: E0513 23:44:28.070948 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.070963 kubelet[3386]: W0513 23:44:28.070959 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.071019 kubelet[3386]: E0513 23:44:28.070967 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.071093 kubelet[3386]: E0513 23:44:28.071080 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.071093 kubelet[3386]: W0513 23:44:28.071091 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.071248 kubelet[3386]: E0513 23:44:28.071100 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.071248 kubelet[3386]: E0513 23:44:28.071241 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.071330 kubelet[3386]: W0513 23:44:28.071249 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.071330 kubelet[3386]: E0513 23:44:28.071258 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.071400 kubelet[3386]: E0513 23:44:28.071386 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.071400 kubelet[3386]: W0513 23:44:28.071397 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.071457 kubelet[3386]: E0513 23:44:28.071405 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.071552 kubelet[3386]: E0513 23:44:28.071534 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.071552 kubelet[3386]: W0513 23:44:28.071547 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.071645 kubelet[3386]: E0513 23:44:28.071555 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.071685 kubelet[3386]: E0513 23:44:28.071669 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.071685 kubelet[3386]: W0513 23:44:28.071680 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.071752 kubelet[3386]: E0513 23:44:28.071687 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.071819 kubelet[3386]: E0513 23:44:28.071805 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.071819 kubelet[3386]: W0513 23:44:28.071816 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.071879 kubelet[3386]: E0513 23:44:28.071823 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.071954 kubelet[3386]: E0513 23:44:28.071941 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.071954 kubelet[3386]: W0513 23:44:28.071952 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.072017 kubelet[3386]: E0513 23:44:28.071961 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.072094 kubelet[3386]: E0513 23:44:28.072082 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.072094 kubelet[3386]: W0513 23:44:28.072093 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.072151 kubelet[3386]: E0513 23:44:28.072101 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.072246 kubelet[3386]: E0513 23:44:28.072232 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.072246 kubelet[3386]: W0513 23:44:28.072244 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.072315 kubelet[3386]: E0513 23:44:28.072252 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.072388 kubelet[3386]: E0513 23:44:28.072376 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.072388 kubelet[3386]: W0513 23:44:28.072387 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.072453 kubelet[3386]: E0513 23:44:28.072394 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.072528 kubelet[3386]: E0513 23:44:28.072514 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.072528 kubelet[3386]: W0513 23:44:28.072525 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.072623 kubelet[3386]: E0513 23:44:28.072532 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.072697 kubelet[3386]: E0513 23:44:28.072682 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.072697 kubelet[3386]: W0513 23:44:28.072694 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.072758 kubelet[3386]: E0513 23:44:28.072703 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.072835 kubelet[3386]: E0513 23:44:28.072822 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.072835 kubelet[3386]: W0513 23:44:28.072833 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.072892 kubelet[3386]: E0513 23:44:28.072850 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.072978 kubelet[3386]: E0513 23:44:28.072966 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.072978 kubelet[3386]: W0513 23:44:28.072976 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.073044 kubelet[3386]: E0513 23:44:28.072984 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.073114 kubelet[3386]: E0513 23:44:28.073102 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.073114 kubelet[3386]: W0513 23:44:28.073112 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.073174 kubelet[3386]: E0513 23:44:28.073119 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.073377 kubelet[3386]: E0513 23:44:28.073360 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.073377 kubelet[3386]: W0513 23:44:28.073374 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.073456 kubelet[3386]: E0513 23:44:28.073386 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.073541 kubelet[3386]: E0513 23:44:28.073527 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.073541 kubelet[3386]: W0513 23:44:28.073539 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.073602 kubelet[3386]: E0513 23:44:28.073548 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.073692 kubelet[3386]: E0513 23:44:28.073679 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.073692 kubelet[3386]: W0513 23:44:28.073690 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.073751 kubelet[3386]: E0513 23:44:28.073698 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.073830 kubelet[3386]: E0513 23:44:28.073818 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.073830 kubelet[3386]: W0513 23:44:28.073828 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.073904 kubelet[3386]: E0513 23:44:28.073835 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.073993 kubelet[3386]: E0513 23:44:28.073979 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.073993 kubelet[3386]: W0513 23:44:28.073991 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.074047 kubelet[3386]: E0513 23:44:28.073999 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.074252 kubelet[3386]: E0513 23:44:28.074168 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.074252 kubelet[3386]: W0513 23:44:28.074180 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.074252 kubelet[3386]: E0513 23:44:28.074189 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.074457 kubelet[3386]: E0513 23:44:28.074420 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.074457 kubelet[3386]: W0513 23:44:28.074451 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.074690 kubelet[3386]: E0513 23:44:28.074462 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.074690 kubelet[3386]: E0513 23:44:28.074599 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.074690 kubelet[3386]: W0513 23:44:28.074606 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.074690 kubelet[3386]: E0513 23:44:28.074614 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.075431 kubelet[3386]: E0513 23:44:28.074756 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.075431 kubelet[3386]: W0513 23:44:28.074764 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.075431 kubelet[3386]: E0513 23:44:28.074772 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.075431 kubelet[3386]: E0513 23:44:28.074902 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.075431 kubelet[3386]: W0513 23:44:28.074927 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.075431 kubelet[3386]: E0513 23:44:28.074939 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.075431 kubelet[3386]: E0513 23:44:28.075088 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.075431 kubelet[3386]: W0513 23:44:28.075096 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.075431 kubelet[3386]: E0513 23:44:28.075104 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.075431 kubelet[3386]: E0513 23:44:28.075248 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.075643 kubelet[3386]: W0513 23:44:28.075256 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.075643 kubelet[3386]: E0513 23:44:28.075264 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.075643 kubelet[3386]: E0513 23:44:28.075520 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.075643 kubelet[3386]: W0513 23:44:28.075530 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.075643 kubelet[3386]: E0513 23:44:28.075540 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.075740 kubelet[3386]: E0513 23:44:28.075666 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.075740 kubelet[3386]: W0513 23:44:28.075673 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.075740 kubelet[3386]: E0513 23:44:28.075681 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.075797 kubelet[3386]: E0513 23:44:28.075781 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.075797 kubelet[3386]: W0513 23:44:28.075788 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.075797 kubelet[3386]: E0513 23:44:28.075795 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.076758 kubelet[3386]: E0513 23:44:28.075963 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.076758 kubelet[3386]: W0513 23:44:28.075971 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.076758 kubelet[3386]: E0513 23:44:28.075979 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.076758 kubelet[3386]: I0513 23:44:28.076000 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74844576-5d15-4959-aa52-fdc4c5736ba8-kubelet-dir\") pod \"csi-node-driver-kqzwk\" (UID: \"74844576-5d15-4959-aa52-fdc4c5736ba8\") " pod="calico-system/csi-node-driver-kqzwk" May 13 23:44:28.076758 kubelet[3386]: E0513 23:44:28.076120 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.076758 kubelet[3386]: W0513 23:44:28.076128 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.076758 kubelet[3386]: E0513 23:44:28.076135 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.076758 kubelet[3386]: I0513 23:44:28.076148 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4zds\" (UniqueName: \"kubernetes.io/projected/74844576-5d15-4959-aa52-fdc4c5736ba8-kube-api-access-p4zds\") pod \"csi-node-driver-kqzwk\" (UID: \"74844576-5d15-4959-aa52-fdc4c5736ba8\") " pod="calico-system/csi-node-driver-kqzwk" May 13 23:44:28.076758 kubelet[3386]: E0513 23:44:28.076353 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.077576 kubelet[3386]: W0513 23:44:28.076361 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.077576 kubelet[3386]: E0513 23:44:28.076371 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.077576 kubelet[3386]: I0513 23:44:28.076388 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/74844576-5d15-4959-aa52-fdc4c5736ba8-registration-dir\") pod \"csi-node-driver-kqzwk\" (UID: \"74844576-5d15-4959-aa52-fdc4c5736ba8\") " pod="calico-system/csi-node-driver-kqzwk" May 13 23:44:28.077576 kubelet[3386]: E0513 23:44:28.076631 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.077576 kubelet[3386]: W0513 23:44:28.076644 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.077576 kubelet[3386]: E0513 23:44:28.076669 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.077576 kubelet[3386]: E0513 23:44:28.076825 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.077576 kubelet[3386]: W0513 23:44:28.076832 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.077576 kubelet[3386]: E0513 23:44:28.076841 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.077757 kubelet[3386]: E0513 23:44:28.076962 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.077757 kubelet[3386]: W0513 23:44:28.076969 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.077757 kubelet[3386]: E0513 23:44:28.076982 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.077757 kubelet[3386]: I0513 23:44:28.077003 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/74844576-5d15-4959-aa52-fdc4c5736ba8-varrun\") pod \"csi-node-driver-kqzwk\" (UID: \"74844576-5d15-4959-aa52-fdc4c5736ba8\") " pod="calico-system/csi-node-driver-kqzwk" May 13 23:44:28.077757 kubelet[3386]: E0513 23:44:28.077128 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.077757 kubelet[3386]: W0513 23:44:28.077135 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.077757 kubelet[3386]: E0513 23:44:28.077149 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.077757 kubelet[3386]: I0513 23:44:28.077163 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/74844576-5d15-4959-aa52-fdc4c5736ba8-socket-dir\") pod \"csi-node-driver-kqzwk\" (UID: \"74844576-5d15-4959-aa52-fdc4c5736ba8\") " pod="calico-system/csi-node-driver-kqzwk" May 13 23:44:28.079049 kubelet[3386]: E0513 23:44:28.078756 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.079049 kubelet[3386]: W0513 23:44:28.078770 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.079049 kubelet[3386]: E0513 23:44:28.078792 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.079705 kubelet[3386]: E0513 23:44:28.079632 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.079705 kubelet[3386]: W0513 23:44:28.079646 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.079793 kubelet[3386]: E0513 23:44:28.079726 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.079864 kubelet[3386]: E0513 23:44:28.079843 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.079864 kubelet[3386]: W0513 23:44:28.079859 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.079957 kubelet[3386]: E0513 23:44:28.079885 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.080019 kubelet[3386]: E0513 23:44:28.080000 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.080019 kubelet[3386]: W0513 23:44:28.080013 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.080101 kubelet[3386]: E0513 23:44:28.080083 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.080150 kubelet[3386]: E0513 23:44:28.080145 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.080176 kubelet[3386]: W0513 23:44:28.080152 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.080301 kubelet[3386]: E0513 23:44:28.080282 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.080400 kubelet[3386]: E0513 23:44:28.080383 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.080400 kubelet[3386]: W0513 23:44:28.080397 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.080494 kubelet[3386]: E0513 23:44:28.080421 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.080547 kubelet[3386]: E0513 23:44:28.080531 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.080547 kubelet[3386]: W0513 23:44:28.080543 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.080636 kubelet[3386]: E0513 23:44:28.080617 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.080679 kubelet[3386]: E0513 23:44:28.080663 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.080679 kubelet[3386]: W0513 23:44:28.080669 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.080679 kubelet[3386]: E0513 23:44:28.080678 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.080851 kubelet[3386]: E0513 23:44:28.080834 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.080851 kubelet[3386]: W0513 23:44:28.080848 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.081016 kubelet[3386]: E0513 23:44:28.080857 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.081097 kubelet[3386]: E0513 23:44:28.081083 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.081148 kubelet[3386]: W0513 23:44:28.081137 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.081326 kubelet[3386]: E0513 23:44:28.081198 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.081426 kubelet[3386]: E0513 23:44:28.081414 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.081479 kubelet[3386]: W0513 23:44:28.081467 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.081548 kubelet[3386]: E0513 23:44:28.081536 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.081758 kubelet[3386]: E0513 23:44:28.081747 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.081929 kubelet[3386]: W0513 23:44:28.081819 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.081929 kubelet[3386]: E0513 23:44:28.081836 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.082059 kubelet[3386]: E0513 23:44:28.082048 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.082112 kubelet[3386]: W0513 23:44:28.082102 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.082169 kubelet[3386]: E0513 23:44:28.082158 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.183354 kubelet[3386]: E0513 23:44:28.183327 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.183693 kubelet[3386]: W0513 23:44:28.183559 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.183693 kubelet[3386]: E0513 23:44:28.183586 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.183992 kubelet[3386]: E0513 23:44:28.183880 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.183992 kubelet[3386]: W0513 23:44:28.183892 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.183992 kubelet[3386]: E0513 23:44:28.183911 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.184155 kubelet[3386]: E0513 23:44:28.184144 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.184230 kubelet[3386]: W0513 23:44:28.184199 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.184388 kubelet[3386]: E0513 23:44:28.184284 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.184510 kubelet[3386]: E0513 23:44:28.184499 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.184569 kubelet[3386]: W0513 23:44:28.184558 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.184634 kubelet[3386]: E0513 23:44:28.184624 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.184883 kubelet[3386]: E0513 23:44:28.184864 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.184883 kubelet[3386]: W0513 23:44:28.184880 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.184960 kubelet[3386]: E0513 23:44:28.184897 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.185137 kubelet[3386]: E0513 23:44:28.185041 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.185137 kubelet[3386]: W0513 23:44:28.185056 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.185137 kubelet[3386]: E0513 23:44:28.185066 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.185419 kubelet[3386]: E0513 23:44:28.185237 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.185419 kubelet[3386]: W0513 23:44:28.185249 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.185419 kubelet[3386]: E0513 23:44:28.185266 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.185547 kubelet[3386]: E0513 23:44:28.185535 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.185602 kubelet[3386]: W0513 23:44:28.185590 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.185668 kubelet[3386]: E0513 23:44:28.185658 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.185865 kubelet[3386]: E0513 23:44:28.185853 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.185977 kubelet[3386]: W0513 23:44:28.185964 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.186050 kubelet[3386]: E0513 23:44:28.186032 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.186336 kubelet[3386]: E0513 23:44:28.186249 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.186336 kubelet[3386]: W0513 23:44:28.186260 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.186336 kubelet[3386]: E0513 23:44:28.186283 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.186488 kubelet[3386]: E0513 23:44:28.186476 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.186540 kubelet[3386]: W0513 23:44:28.186530 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.186670 kubelet[3386]: E0513 23:44:28.186603 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.186768 kubelet[3386]: E0513 23:44:28.186756 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.186829 kubelet[3386]: W0513 23:44:28.186818 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.186979 kubelet[3386]: E0513 23:44:28.186886 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.187327 kubelet[3386]: E0513 23:44:28.187060 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.187327 kubelet[3386]: W0513 23:44:28.187074 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.187327 kubelet[3386]: E0513 23:44:28.187092 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.187327 kubelet[3386]: E0513 23:44:28.187323 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.187455 kubelet[3386]: W0513 23:44:28.187334 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.187455 kubelet[3386]: E0513 23:44:28.187354 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.187640 kubelet[3386]: E0513 23:44:28.187503 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.187640 kubelet[3386]: W0513 23:44:28.187526 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.187640 kubelet[3386]: E0513 23:44:28.187540 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.188343 kubelet[3386]: E0513 23:44:28.188313 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.188343 kubelet[3386]: W0513 23:44:28.188336 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.188459 kubelet[3386]: E0513 23:44:28.188437 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.188588 kubelet[3386]: E0513 23:44:28.188570 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.188588 kubelet[3386]: W0513 23:44:28.188585 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.188725 kubelet[3386]: E0513 23:44:28.188617 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.188979 kubelet[3386]: E0513 23:44:28.188952 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.188979 kubelet[3386]: W0513 23:44:28.188974 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.188979 kubelet[3386]: E0513 23:44:28.188994 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.189316 kubelet[3386]: E0513 23:44:28.189294 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.189593 kubelet[3386]: W0513 23:44:28.189563 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.189651 kubelet[3386]: E0513 23:44:28.189609 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.189974 kubelet[3386]: E0513 23:44:28.189954 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.189974 kubelet[3386]: W0513 23:44:28.189969 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.190077 kubelet[3386]: E0513 23:44:28.190051 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.190189 kubelet[3386]: E0513 23:44:28.190173 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.190189 kubelet[3386]: W0513 23:44:28.190186 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.190354 kubelet[3386]: E0513 23:44:28.190231 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.190385 kubelet[3386]: E0513 23:44:28.190373 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.190385 kubelet[3386]: W0513 23:44:28.190382 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.190477 kubelet[3386]: E0513 23:44:28.190397 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.190543 kubelet[3386]: E0513 23:44:28.190529 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.190543 kubelet[3386]: W0513 23:44:28.190539 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.190609 kubelet[3386]: E0513 23:44:28.190555 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.190736 kubelet[3386]: E0513 23:44:28.190719 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.190736 kubelet[3386]: W0513 23:44:28.190733 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.190796 kubelet[3386]: E0513 23:44:28.190746 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.191059 kubelet[3386]: E0513 23:44:28.191039 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.191059 kubelet[3386]: W0513 23:44:28.191055 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.191128 kubelet[3386]: E0513 23:44:28.191066 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.191267 kubelet[3386]: E0513 23:44:28.191250 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.191267 kubelet[3386]: W0513 23:44:28.191266 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.191333 kubelet[3386]: E0513 23:44:28.191284 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.191471 kubelet[3386]: E0513 23:44:28.191445 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.191471 kubelet[3386]: W0513 23:44:28.191463 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.191566 kubelet[3386]: E0513 23:44:28.191541 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.191671 kubelet[3386]: E0513 23:44:28.191656 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.191671 kubelet[3386]: W0513 23:44:28.191668 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.191718 kubelet[3386]: E0513 23:44:28.191677 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.191820 kubelet[3386]: E0513 23:44:28.191805 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.191820 kubelet[3386]: W0513 23:44:28.191817 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.191871 kubelet[3386]: E0513 23:44:28.191825 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.192010 kubelet[3386]: E0513 23:44:28.191993 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.192010 kubelet[3386]: W0513 23:44:28.192006 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.192072 kubelet[3386]: E0513 23:44:28.192015 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.292721 kubelet[3386]: E0513 23:44:28.292685 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.293009 kubelet[3386]: W0513 23:44:28.292871 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.293009 kubelet[3386]: E0513 23:44:28.292900 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.293158 kubelet[3386]: E0513 23:44:28.293147 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.293294 kubelet[3386]: W0513 23:44:28.293203 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.293294 kubelet[3386]: E0513 23:44:28.293239 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.293601 kubelet[3386]: E0513 23:44:28.293499 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.293601 kubelet[3386]: W0513 23:44:28.293511 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.293601 kubelet[3386]: E0513 23:44:28.293521 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.293838 kubelet[3386]: E0513 23:44:28.293773 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.293838 kubelet[3386]: W0513 23:44:28.293783 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.293838 kubelet[3386]: E0513 23:44:28.293793 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.294159 kubelet[3386]: E0513 23:44:28.294042 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.294159 kubelet[3386]: W0513 23:44:28.294053 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.294159 kubelet[3386]: E0513 23:44:28.294063 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.294445 kubelet[3386]: E0513 23:44:28.294385 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.294445 kubelet[3386]: W0513 23:44:28.294397 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.294445 kubelet[3386]: E0513 23:44:28.294410 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.396268 kubelet[3386]: E0513 23:44:28.396168 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.396268 kubelet[3386]: W0513 23:44:28.396189 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.396268 kubelet[3386]: E0513 23:44:28.396225 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.397095 kubelet[3386]: E0513 23:44:28.396423 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.397095 kubelet[3386]: W0513 23:44:28.396431 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.397095 kubelet[3386]: E0513 23:44:28.396441 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.397095 kubelet[3386]: E0513 23:44:28.396726 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.397095 kubelet[3386]: W0513 23:44:28.396735 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.397095 kubelet[3386]: E0513 23:44:28.396744 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.397095 kubelet[3386]: E0513 23:44:28.396945 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.397095 kubelet[3386]: W0513 23:44:28.396954 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.397095 kubelet[3386]: E0513 23:44:28.396964 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.397370 kubelet[3386]: E0513 23:44:28.397132 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.397370 kubelet[3386]: W0513 23:44:28.397140 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.397370 kubelet[3386]: E0513 23:44:28.397149 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.397370 kubelet[3386]: E0513 23:44:28.397317 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.397370 kubelet[3386]: W0513 23:44:28.397326 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.397370 kubelet[3386]: E0513 23:44:28.397334 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.471737 kubelet[3386]: E0513 23:44:28.469244 3386 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition May 13 23:44:28.471737 kubelet[3386]: E0513 23:44:28.469342 3386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-tigera-ca-bundle podName:e0ad5a15-3ec0-4420-8855-56d5eb85ddf6 nodeName:}" failed. No retries permitted until 2025-05-13 23:44:28.969315222 +0000 UTC m=+16.266251831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-tigera-ca-bundle") pod "calico-typha-67c7dd6748-96s67" (UID: "e0ad5a15-3ec0-4420-8855-56d5eb85ddf6") : failed to sync configmap cache: timed out waiting for the condition May 13 23:44:28.471737 kubelet[3386]: E0513 23:44:28.469541 3386 secret.go:188] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition May 13 23:44:28.471737 kubelet[3386]: E0513 23:44:28.469589 3386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-typha-certs podName:e0ad5a15-3ec0-4420-8855-56d5eb85ddf6 nodeName:}" failed. No retries permitted until 2025-05-13 23:44:28.969566823 +0000 UTC m=+16.266503392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-typha-certs") pod "calico-typha-67c7dd6748-96s67" (UID: "e0ad5a15-3ec0-4420-8855-56d5eb85ddf6") : failed to sync secret cache: timed out waiting for the condition May 13 23:44:28.500853 kubelet[3386]: E0513 23:44:28.500713 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.500853 kubelet[3386]: W0513 23:44:28.500746 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.500853 kubelet[3386]: E0513 23:44:28.500764 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.501183 kubelet[3386]: E0513 23:44:28.501171 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.501387 kubelet[3386]: W0513 23:44:28.501271 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.501387 kubelet[3386]: E0513 23:44:28.501296 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.501647 kubelet[3386]: E0513 23:44:28.501546 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.501647 kubelet[3386]: W0513 23:44:28.501557 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.501647 kubelet[3386]: E0513 23:44:28.501567 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.501852 kubelet[3386]: E0513 23:44:28.501841 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.502003 kubelet[3386]: W0513 23:44:28.501905 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.502003 kubelet[3386]: E0513 23:44:28.501926 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.503264 kubelet[3386]: E0513 23:44:28.502533 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.503264 kubelet[3386]: W0513 23:44:28.502554 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.503264 kubelet[3386]: E0513 23:44:28.502567 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.503264 kubelet[3386]: E0513 23:44:28.502764 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.503264 kubelet[3386]: W0513 23:44:28.502772 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.503264 kubelet[3386]: E0513 23:44:28.502781 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.503511 kubelet[3386]: E0513 23:44:28.503496 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.503569 kubelet[3386]: W0513 23:44:28.503557 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.503621 kubelet[3386]: E0513 23:44:28.503611 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.603262 kubelet[3386]: E0513 23:44:28.603231 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.603262 kubelet[3386]: W0513 23:44:28.603253 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.603262 kubelet[3386]: E0513 23:44:28.603271 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.603460 kubelet[3386]: E0513 23:44:28.603440 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.603460 kubelet[3386]: W0513 23:44:28.603456 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.603512 kubelet[3386]: E0513 23:44:28.603466 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.603631 kubelet[3386]: E0513 23:44:28.603615 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.603654 kubelet[3386]: W0513 23:44:28.603630 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.603654 kubelet[3386]: E0513 23:44:28.603640 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.603825 kubelet[3386]: E0513 23:44:28.603807 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.603825 kubelet[3386]: W0513 23:44:28.603820 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.603884 kubelet[3386]: E0513 23:44:28.603829 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.604018 kubelet[3386]: E0513 23:44:28.603998 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.604018 kubelet[3386]: W0513 23:44:28.604013 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.604071 kubelet[3386]: E0513 23:44:28.604023 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.633333 kubelet[3386]: E0513 23:44:28.633301 3386 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 13 23:44:28.633333 kubelet[3386]: E0513 23:44:28.633332 3386 projected.go:194] Error preparing data for projected volume kube-api-access-pgtl8 for pod calico-system/calico-typha-67c7dd6748-96s67: failed to sync configmap cache: timed out waiting for the condition May 13 23:44:28.633481 kubelet[3386]: E0513 23:44:28.633395 3386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-kube-api-access-pgtl8 podName:e0ad5a15-3ec0-4420-8855-56d5eb85ddf6 nodeName:}" failed. No retries permitted until 2025-05-13 23:44:29.133375238 +0000 UTC m=+16.430311807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pgtl8" (UniqueName: "kubernetes.io/projected/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-kube-api-access-pgtl8") pod "calico-typha-67c7dd6748-96s67" (UID: "e0ad5a15-3ec0-4420-8855-56d5eb85ddf6") : failed to sync configmap cache: timed out waiting for the condition May 13 23:44:28.705371 kubelet[3386]: E0513 23:44:28.705270 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.705371 kubelet[3386]: W0513 23:44:28.705293 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.705371 kubelet[3386]: E0513 23:44:28.705312 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.705517 kubelet[3386]: E0513 23:44:28.705483 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.705517 kubelet[3386]: W0513 23:44:28.705491 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.705517 kubelet[3386]: E0513 23:44:28.705499 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.705654 kubelet[3386]: E0513 23:44:28.705638 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.705654 kubelet[3386]: W0513 23:44:28.705649 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.705707 kubelet[3386]: E0513 23:44:28.705658 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.705801 kubelet[3386]: E0513 23:44:28.705786 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.705801 kubelet[3386]: W0513 23:44:28.705798 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.705849 kubelet[3386]: E0513 23:44:28.705807 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.706179 kubelet[3386]: E0513 23:44:28.706159 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.706179 kubelet[3386]: W0513 23:44:28.706174 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.706250 kubelet[3386]: E0513 23:44:28.706185 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.807658 kubelet[3386]: E0513 23:44:28.807628 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.807658 kubelet[3386]: W0513 23:44:28.807651 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.807789 kubelet[3386]: E0513 23:44:28.807672 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.808350 kubelet[3386]: E0513 23:44:28.807855 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.808350 kubelet[3386]: W0513 23:44:28.807870 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.808350 kubelet[3386]: E0513 23:44:28.807880 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.808350 kubelet[3386]: E0513 23:44:28.808116 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.808350 kubelet[3386]: W0513 23:44:28.808127 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.808350 kubelet[3386]: E0513 23:44:28.808137 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.808483 kubelet[3386]: E0513 23:44:28.808382 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.808483 kubelet[3386]: W0513 23:44:28.808392 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.808483 kubelet[3386]: E0513 23:44:28.808401 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.808611 kubelet[3386]: E0513 23:44:28.808591 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.808611 kubelet[3386]: W0513 23:44:28.808605 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.808660 kubelet[3386]: E0513 23:44:28.808615 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.909526 kubelet[3386]: E0513 23:44:28.909492 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.909526 kubelet[3386]: W0513 23:44:28.909516 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.909526 kubelet[3386]: E0513 23:44:28.909535 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.909741 kubelet[3386]: E0513 23:44:28.909720 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.909741 kubelet[3386]: W0513 23:44:28.909733 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.909741 kubelet[3386]: E0513 23:44:28.909743 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:28.909983 kubelet[3386]: E0513 23:44:28.909964 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.909983 kubelet[3386]: W0513 23:44:28.909979 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.910031 kubelet[3386]: E0513 23:44:28.909988 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.910249 kubelet[3386]: E0513 23:44:28.910229 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.910249 kubelet[3386]: W0513 23:44:28.910244 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.910298 kubelet[3386]: E0513 23:44:28.910253 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.910452 kubelet[3386]: E0513 23:44:28.910431 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.910452 kubelet[3386]: W0513 23:44:28.910444 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.910508 kubelet[3386]: E0513 23:44:28.910452 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.919604 kubelet[3386]: E0513 23:44:28.919572 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.919604 kubelet[3386]: W0513 23:44:28.919591 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.919604 kubelet[3386]: E0513 23:44:28.919605 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:28.924346 kubelet[3386]: E0513 23:44:28.922584 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:28.924346 kubelet[3386]: W0513 23:44:28.922603 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:28.924346 kubelet[3386]: E0513 23:44:28.922617 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:29.011935 kubelet[3386]: E0513 23:44:29.010925 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.011935 kubelet[3386]: W0513 23:44:29.010949 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.011935 kubelet[3386]: E0513 23:44:29.010969 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.011935 kubelet[3386]: E0513 23:44:29.011245 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.011935 kubelet[3386]: W0513 23:44:29.011255 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.011935 kubelet[3386]: E0513 23:44:29.011394 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.011935 kubelet[3386]: E0513 23:44:29.011495 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.011935 kubelet[3386]: W0513 23:44:29.011503 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.011935 kubelet[3386]: E0513 23:44:29.011517 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.011935 kubelet[3386]: E0513 23:44:29.011702 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.013482 kubelet[3386]: W0513 23:44:29.011710 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.013482 kubelet[3386]: E0513 23:44:29.011791 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.013482 kubelet[3386]: E0513 23:44:29.011911 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.013482 kubelet[3386]: W0513 23:44:29.011919 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.013482 kubelet[3386]: E0513 23:44:29.011935 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:29.013482 kubelet[3386]: E0513 23:44:29.012246 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.013482 kubelet[3386]: W0513 23:44:29.012256 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.013482 kubelet[3386]: E0513 23:44:29.012273 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.013482 kubelet[3386]: E0513 23:44:29.012472 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.013482 kubelet[3386]: W0513 23:44:29.012480 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.013671 kubelet[3386]: E0513 23:44:29.012495 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.013671 kubelet[3386]: E0513 23:44:29.012778 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.013671 kubelet[3386]: W0513 23:44:29.012787 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.013671 kubelet[3386]: E0513 23:44:29.012986 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.013671 kubelet[3386]: E0513 23:44:29.013319 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.013671 kubelet[3386]: W0513 23:44:29.013329 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.013671 kubelet[3386]: E0513 23:44:29.013340 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.013806 kubelet[3386]: E0513 23:44:29.013781 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.013806 kubelet[3386]: W0513 23:44:29.013794 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.013848 kubelet[3386]: E0513 23:44:29.013806 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:29.014267 kubelet[3386]: E0513 23:44:29.014245 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.014267 kubelet[3386]: W0513 23:44:29.014264 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.014322 kubelet[3386]: E0513 23:44:29.014277 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.016706 kubelet[3386]: E0513 23:44:29.016685 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.016745 kubelet[3386]: W0513 23:44:29.016713 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.016745 kubelet[3386]: E0513 23:44:29.016727 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.019587 kubelet[3386]: E0513 23:44:29.019306 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.019587 kubelet[3386]: W0513 23:44:29.019324 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.019587 kubelet[3386]: E0513 23:44:29.019337 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.028087 containerd[1765]: time="2025-05-13T23:44:29.028050235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4pt9x,Uid:56848f67-cdcb-434e-b946-495c63c2981d,Namespace:calico-system,Attempt:0,}" May 13 23:44:29.113704 kubelet[3386]: E0513 23:44:29.113610 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.113704 kubelet[3386]: W0513 23:44:29.113631 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.113704 kubelet[3386]: E0513 23:44:29.113661 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:29.215017 kubelet[3386]: E0513 23:44:29.214972 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.215017 kubelet[3386]: W0513 23:44:29.215001 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.215017 kubelet[3386]: E0513 23:44:29.215021 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.216014 kubelet[3386]: E0513 23:44:29.215820 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.216014 kubelet[3386]: W0513 23:44:29.215833 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.216014 kubelet[3386]: E0513 23:44:29.215846 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.216187 kubelet[3386]: E0513 23:44:29.216164 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.216187 kubelet[3386]: W0513 23:44:29.216178 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.216297 kubelet[3386]: E0513 23:44:29.216189 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.216552 kubelet[3386]: E0513 23:44:29.216532 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.216552 kubelet[3386]: W0513 23:44:29.216547 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.216810 kubelet[3386]: E0513 23:44:29.216657 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.217006 kubelet[3386]: E0513 23:44:29.216977 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.217006 kubelet[3386]: W0513 23:44:29.216992 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.217006 kubelet[3386]: E0513 23:44:29.217003 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:44:29.228247 kubelet[3386]: E0513 23:44:29.228147 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:44:29.228247 kubelet[3386]: W0513 23:44:29.228169 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:44:29.228247 kubelet[3386]: E0513 23:44:29.228188 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:44:29.327666 containerd[1765]: time="2025-05-13T23:44:29.327434428Z" level=info msg="connecting to shim 060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc" address="unix:///run/containerd/s/c667d15a2a2391ceb3a0f10b823a646ef92695591d8e09abc116afa3b208383c" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:29.365402 systemd[1]: Started cri-containerd-060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc.scope - libcontainer container 060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc. May 13 23:44:29.404966 containerd[1765]: time="2025-05-13T23:44:29.404916970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4pt9x,Uid:56848f67-cdcb-434e-b946-495c63c2981d,Namespace:calico-system,Attempt:0,} returns sandbox id \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\"" May 13 23:44:29.407657 containerd[1765]: time="2025-05-13T23:44:29.407457293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 23:44:29.442152 containerd[1765]: time="2025-05-13T23:44:29.442094619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67c7dd6748-96s67,Uid:e0ad5a15-3ec0-4420-8855-56d5eb85ddf6,Namespace:calico-system,Attempt:0,}" May 13 23:44:29.738529 containerd[1765]: time="2025-05-13T23:44:29.738444767Z" level=info msg="connecting to shim 97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182" address="unix:///run/containerd/s/3e5dc52a5f2c9498ebde7b3f21fd9990aed97b783e78eb7af1298373e87bb1ee" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:29.760386 systemd[1]: Started cri-containerd-97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182.scope - libcontainer container 97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182. 
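
The wall of driver-call.go / plugins.go messages above comes from the kubelet re-probing its FlexVolume plugin directory: for every vendor~driver subdirectory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it executes the driver binary with the argument init and tries to decode a JSON status object from stdout. Because the nodeagent~uds/uds executable is not present on this node, each probe returns empty output and the decode fails with "unexpected end of JSON input", and the probe is retried, so the same three-message pattern repeats throughout the capture. As a rough, hypothetical illustration of the exec contract being probed (not the real nodeagent/uds driver), a stub that would satisfy the init call could look like this in Python:

#!/usr/bin/env python3
# Hypothetical FlexVolume driver stub. The kubelet runs
# <plugin-dir>/<vendor~driver>/<driver> init and parses the JSON object
# printed on stdout; an empty reply is what produces
# "unexpected end of JSON input" in the log above.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # "capabilities" tells the kubelet whether the driver implements attach/detach.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Any volume operation this stub does not handle.
    print(json.dumps({"status": "Not supported", "message": f"operation {op!r} not implemented"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())

While the real driver is absent these probe failures should be harmless noise; installing an executable that answers init (or removing the nodeagent~uds directory) would be expected to stop them.
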
May 13 23:44:29.818259 kubelet[3386]: E0513 23:44:29.817844 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqzwk" podUID="74844576-5d15-4959-aa52-fdc4c5736ba8" May 13 23:44:29.830492 containerd[1765]: time="2025-05-13T23:44:29.830377648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67c7dd6748-96s67,Uid:e0ad5a15-3ec0-4420-8855-56d5eb85ddf6,Namespace:calico-system,Attempt:0,} returns sandbox id \"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\"" May 13 23:44:31.345350 containerd[1765]: time="2025-05-13T23:44:31.345304257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:31.348319 containerd[1765]: time="2025-05-13T23:44:31.348270461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 13 23:44:31.355334 containerd[1765]: time="2025-05-13T23:44:31.355304150Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:31.363338 containerd[1765]: time="2025-05-13T23:44:31.363283241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:31.364194 containerd[1765]: time="2025-05-13T23:44:31.363804242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.956208349s" May 13 23:44:31.364194 containerd[1765]: time="2025-05-13T23:44:31.363837042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 13 23:44:31.364951 containerd[1765]: time="2025-05-13T23:44:31.364908484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 23:44:31.366713 containerd[1765]: time="2025-05-13T23:44:31.366679646Z" level=info msg="CreateContainer within sandbox \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 23:44:31.417423 containerd[1765]: time="2025-05-13T23:44:31.417332075Z" level=info msg="Container a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:31.439876 containerd[1765]: time="2025-05-13T23:44:31.439760186Z" level=info msg="CreateContainer within sandbox \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\"" May 13 23:44:31.440230 containerd[1765]: time="2025-05-13T23:44:31.440185586Z" level=info msg="StartContainer for 
\"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\"" May 13 23:44:31.441624 containerd[1765]: time="2025-05-13T23:44:31.441591428Z" level=info msg="connecting to shim a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa" address="unix:///run/containerd/s/c667d15a2a2391ceb3a0f10b823a646ef92695591d8e09abc116afa3b208383c" protocol=ttrpc version=3 May 13 23:44:31.462383 systemd[1]: Started cri-containerd-a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa.scope - libcontainer container a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa. May 13 23:44:31.499930 containerd[1765]: time="2025-05-13T23:44:31.499889188Z" level=info msg="StartContainer for \"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\" returns successfully" May 13 23:44:31.508633 systemd[1]: cri-containerd-a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa.scope: Deactivated successfully. May 13 23:44:31.513076 containerd[1765]: time="2025-05-13T23:44:31.513038845Z" level=info msg="received exit event container_id:\"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\" id:\"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\" pid:4052 exited_at:{seconds:1747179871 nanos:512340245}" May 13 23:44:31.513345 containerd[1765]: time="2025-05-13T23:44:31.513022165Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\" id:\"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\" pid:4052 exited_at:{seconds:1747179871 nanos:512340245}" May 13 23:44:31.531189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa-rootfs.mount: Deactivated successfully. 
May 13 23:44:31.815899 kubelet[3386]: E0513 23:44:31.815850 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqzwk" podUID="74844576-5d15-4959-aa52-fdc4c5736ba8" May 13 23:44:33.495254 containerd[1765]: time="2025-05-13T23:44:33.495178908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:33.499441 containerd[1765]: time="2025-05-13T23:44:33.499377874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 13 23:44:33.506523 containerd[1765]: time="2025-05-13T23:44:33.506472443Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:33.511183 containerd[1765]: time="2025-05-13T23:44:33.511131450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:33.511958 containerd[1765]: time="2025-05-13T23:44:33.511617330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.146668846s" May 13 23:44:33.511958 containerd[1765]: time="2025-05-13T23:44:33.511651171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 13 23:44:33.512798 containerd[1765]: time="2025-05-13T23:44:33.512740572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 23:44:33.524940 containerd[1765]: time="2025-05-13T23:44:33.524853109Z" level=info msg="CreateContainer within sandbox \"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 23:44:33.558230 containerd[1765]: time="2025-05-13T23:44:33.556830232Z" level=info msg="Container 198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:33.578751 containerd[1765]: time="2025-05-13T23:44:33.578708622Z" level=info msg="CreateContainer within sandbox \"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\"" May 13 23:44:33.579396 containerd[1765]: time="2025-05-13T23:44:33.579367263Z" level=info msg="StartContainer for \"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\"" May 13 23:44:33.580481 containerd[1765]: time="2025-05-13T23:44:33.580444384Z" level=info msg="connecting to shim 198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456" address="unix:///run/containerd/s/3e5dc52a5f2c9498ebde7b3f21fd9990aed97b783e78eb7af1298373e87bb1ee" protocol=ttrpc version=3 May 13 23:44:33.600364 systemd[1]: Started 
cri-containerd-198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456.scope - libcontainer container 198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456. May 13 23:44:33.638276 containerd[1765]: time="2025-05-13T23:44:33.638194903Z" level=info msg="StartContainer for \"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\" returns successfully" May 13 23:44:33.816119 kubelet[3386]: E0513 23:44:33.815974 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqzwk" podUID="74844576-5d15-4959-aa52-fdc4c5736ba8" May 13 23:44:33.926986 kubelet[3386]: I0513 23:44:33.926006 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67c7dd6748-96s67" podStartSLOduration=3.246935295 podStartE2EDuration="6.925988055s" podCreationTimestamp="2025-05-13 23:44:27 +0000 UTC" firstStartedPulling="2025-05-13 23:44:29.833418452 +0000 UTC m=+17.130355061" lastFinishedPulling="2025-05-13 23:44:33.512471172 +0000 UTC m=+20.809407821" observedRunningTime="2025-05-13 23:44:33.925976375 +0000 UTC m=+21.222913024" watchObservedRunningTime="2025-05-13 23:44:33.925988055 +0000 UTC m=+21.222924624" May 13 23:44:34.912787 kubelet[3386]: I0513 23:44:34.912754 3386 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:44:35.817348 kubelet[3386]: E0513 23:44:35.816693 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqzwk" podUID="74844576-5d15-4959-aa52-fdc4c5736ba8" May 13 23:44:36.778576 containerd[1765]: time="2025-05-13T23:44:36.778533345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:36.824372 containerd[1765]: time="2025-05-13T23:44:36.824311887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 13 23:44:36.831332 containerd[1765]: time="2025-05-13T23:44:36.831276937Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:36.873065 containerd[1765]: time="2025-05-13T23:44:36.872985954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:36.873895 containerd[1765]: time="2025-05-13T23:44:36.873635235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.360860343s" May 13 23:44:36.873895 containerd[1765]: time="2025-05-13T23:44:36.873670395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 13 23:44:36.876100 
containerd[1765]: time="2025-05-13T23:44:36.876067118Z" level=info msg="CreateContainer within sandbox \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:44:37.038452 containerd[1765]: time="2025-05-13T23:44:37.036147016Z" level=info msg="Container ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:37.177082 containerd[1765]: time="2025-05-13T23:44:37.176913928Z" level=info msg="CreateContainer within sandbox \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\"" May 13 23:44:37.178661 containerd[1765]: time="2025-05-13T23:44:37.177515169Z" level=info msg="StartContainer for \"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\"" May 13 23:44:37.179114 containerd[1765]: time="2025-05-13T23:44:37.179089971Z" level=info msg="connecting to shim ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0" address="unix:///run/containerd/s/c667d15a2a2391ceb3a0f10b823a646ef92695591d8e09abc116afa3b208383c" protocol=ttrpc version=3 May 13 23:44:37.198353 systemd[1]: Started cri-containerd-ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0.scope - libcontainer container ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0. May 13 23:44:37.236183 containerd[1765]: time="2025-05-13T23:44:37.236149729Z" level=info msg="StartContainer for \"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\" returns successfully" May 13 23:44:37.817245 kubelet[3386]: E0513 23:44:37.816658 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqzwk" podUID="74844576-5d15-4959-aa52-fdc4c5736ba8" May 13 23:44:39.816217 kubelet[3386]: E0513 23:44:39.816164 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqzwk" podUID="74844576-5d15-4959-aa52-fdc4c5736ba8" May 13 23:44:41.817152 kubelet[3386]: E0513 23:44:41.816828 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqzwk" podUID="74844576-5d15-4959-aa52-fdc4c5736ba8" May 13 23:44:43.815977 kubelet[3386]: E0513 23:44:43.815912 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqzwk" podUID="74844576-5d15-4959-aa52-fdc4c5736ba8" May 13 23:44:45.815791 kubelet[3386]: E0513 23:44:45.815739 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-kqzwk" podUID="74844576-5d15-4959-aa52-fdc4c5736ba8" May 13 23:44:47.816323 kubelet[3386]: E0513 23:44:47.816238 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqzwk" podUID="74844576-5d15-4959-aa52-fdc4c5736ba8" May 13 23:44:49.404284 containerd[1765]: time="2025-05-13T23:44:49.404152134Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" May 13 23:44:49.406385 systemd[1]: cri-containerd-ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0.scope: Deactivated successfully. May 13 23:44:49.406693 systemd[1]: cri-containerd-ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0.scope: Consumed 354ms CPU time, 172.7M memory peak, 150.3M written to disk. May 13 23:44:49.409801 containerd[1765]: time="2025-05-13T23:44:49.409655021Z" level=info msg="received exit event container_id:\"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\" id:\"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\" pid:4148 exited_at:{seconds:1747179889 nanos:409466661}" May 13 23:44:49.410470 containerd[1765]: time="2025-05-13T23:44:49.409978222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\" id:\"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\" pid:4148 exited_at:{seconds:1747179889 nanos:409466661}" May 13 23:44:49.431925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0-rootfs.mount: Deactivated successfully. 
May 13 23:44:49.508596 kubelet[3386]: I0513 23:44:49.507067 3386 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 23:44:51.870046 kubelet[3386]: I0513 23:44:49.636131 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpsrl\" (UniqueName: \"kubernetes.io/projected/447c5b24-e035-4d61-a6dc-6b184c49bb3e-kube-api-access-fpsrl\") pod \"calico-kube-controllers-7dcd59c5cd-jhn7j\" (UID: \"447c5b24-e035-4d61-a6dc-6b184c49bb3e\") " pod="calico-system/calico-kube-controllers-7dcd59c5cd-jhn7j" May 13 23:44:51.870046 kubelet[3386]: I0513 23:44:49.636192 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54-config-volume\") pod \"coredns-6f6b679f8f-k92sd\" (UID: \"cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54\") " pod="kube-system/coredns-6f6b679f8f-k92sd" May 13 23:44:51.870046 kubelet[3386]: I0513 23:44:49.636227 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2p8w\" (UniqueName: \"kubernetes.io/projected/cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54-kube-api-access-t2p8w\") pod \"coredns-6f6b679f8f-k92sd\" (UID: \"cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54\") " pod="kube-system/coredns-6f6b679f8f-k92sd" May 13 23:44:51.870046 kubelet[3386]: I0513 23:44:49.636284 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52zl8\" (UniqueName: \"kubernetes.io/projected/7e3261f6-25de-4f30-bf75-6f786aec4e94-kube-api-access-52zl8\") pod \"coredns-6f6b679f8f-qtclm\" (UID: \"7e3261f6-25de-4f30-bf75-6f786aec4e94\") " pod="kube-system/coredns-6f6b679f8f-qtclm" May 13 23:44:51.870046 kubelet[3386]: I0513 23:44:49.636375 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8b109c94-4423-43d5-babf-1f349408342e-calico-apiserver-certs\") pod \"calico-apiserver-6c659f749-ltpvp\" (UID: \"8b109c94-4423-43d5-babf-1f349408342e\") " pod="calico-apiserver/calico-apiserver-6c659f749-ltpvp" May 13 23:44:49.555347 systemd[1]: Created slice kubepods-burstable-podcfdf6fc4_3589_4a87_a2a9_c9b41a4fad54.slice - libcontainer container kubepods-burstable-podcfdf6fc4_3589_4a87_a2a9_c9b41a4fad54.slice. 
May 13 23:44:51.871993 kubelet[3386]: I0513 23:44:49.636423 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b116480f-b995-4242-8c60-f97a56d1d6d1-calico-apiserver-certs\") pod \"calico-apiserver-77478f65f8-x25bh\" (UID: \"b116480f-b995-4242-8c60-f97a56d1d6d1\") " pod="calico-apiserver/calico-apiserver-77478f65f8-x25bh" May 13 23:44:51.871993 kubelet[3386]: I0513 23:44:49.636442 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/447c5b24-e035-4d61-a6dc-6b184c49bb3e-tigera-ca-bundle\") pod \"calico-kube-controllers-7dcd59c5cd-jhn7j\" (UID: \"447c5b24-e035-4d61-a6dc-6b184c49bb3e\") " pod="calico-system/calico-kube-controllers-7dcd59c5cd-jhn7j" May 13 23:44:51.871993 kubelet[3386]: I0513 23:44:49.636489 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a6acda27-adaa-464d-9d6d-5b299eb91453-calico-apiserver-certs\") pod \"calico-apiserver-6c659f749-8p7bb\" (UID: \"a6acda27-adaa-464d-9d6d-5b299eb91453\") " pod="calico-apiserver/calico-apiserver-6c659f749-8p7bb" May 13 23:44:51.871993 kubelet[3386]: I0513 23:44:49.636510 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pljzj\" (UniqueName: \"kubernetes.io/projected/a6acda27-adaa-464d-9d6d-5b299eb91453-kube-api-access-pljzj\") pod \"calico-apiserver-6c659f749-8p7bb\" (UID: \"a6acda27-adaa-464d-9d6d-5b299eb91453\") " pod="calico-apiserver/calico-apiserver-6c659f749-8p7bb" May 13 23:44:51.871993 kubelet[3386]: I0513 23:44:49.636529 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp5lt\" (UniqueName: \"kubernetes.io/projected/8b109c94-4423-43d5-babf-1f349408342e-kube-api-access-tp5lt\") pod \"calico-apiserver-6c659f749-ltpvp\" (UID: \"8b109c94-4423-43d5-babf-1f349408342e\") " pod="calico-apiserver/calico-apiserver-6c659f749-ltpvp" May 13 23:44:49.563132 systemd[1]: Created slice kubepods-besteffort-podb116480f_b995_4242_8c60_f97a56d1d6d1.slice - libcontainer container kubepods-besteffort-podb116480f_b995_4242_8c60_f97a56d1d6d1.slice. May 13 23:44:51.872146 kubelet[3386]: I0513 23:44:49.636545 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e3261f6-25de-4f30-bf75-6f786aec4e94-config-volume\") pod \"coredns-6f6b679f8f-qtclm\" (UID: \"7e3261f6-25de-4f30-bf75-6f786aec4e94\") " pod="kube-system/coredns-6f6b679f8f-qtclm" May 13 23:44:51.872146 kubelet[3386]: I0513 23:44:49.636570 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2qbx\" (UniqueName: \"kubernetes.io/projected/b116480f-b995-4242-8c60-f97a56d1d6d1-kube-api-access-b2qbx\") pod \"calico-apiserver-77478f65f8-x25bh\" (UID: \"b116480f-b995-4242-8c60-f97a56d1d6d1\") " pod="calico-apiserver/calico-apiserver-77478f65f8-x25bh" May 13 23:44:49.575916 systemd[1]: Created slice kubepods-besteffort-pod8b109c94_4423_43d5_babf_1f349408342e.slice - libcontainer container kubepods-besteffort-pod8b109c94_4423_43d5_babf_1f349408342e.slice. 
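The "Created slice" entries above and below follow the kubelet's systemd cgroup-driver naming seen in this journal: the QoS class picks the parent segment (kubepods-burstable / kubepods-besteffort) and the pod UID is embedded with dashes mapped to underscores. A small sketch of that mapping, reconstructed from the names in this log rather than from kubelet source.

package main

import (
	"fmt"
	"strings"
)

// sliceName rebuilds the systemd slice name observed in the journal for a
// pod of the given QoS class ("burstable", "besteffort", ...).
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UIDs taken from the volume-reconciler entries above.
	fmt.Println(sliceName("burstable", "cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54"))
	// kubepods-burstable-podcfdf6fc4_3589_4a87_a2a9_c9b41a4fad54.slice
	fmt.Println(sliceName("besteffort", "8b109c94-4423-43d5-babf-1f349408342e"))
	// kubepods-besteffort-pod8b109c94_4423_43d5_babf_1f349408342e.slice
}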
May 13 23:44:49.583681 systemd[1]: Created slice kubepods-besteffort-poda6acda27_adaa_464d_9d6d_5b299eb91453.slice - libcontainer container kubepods-besteffort-poda6acda27_adaa_464d_9d6d_5b299eb91453.slice. May 13 23:44:49.598914 systemd[1]: Created slice kubepods-besteffort-pod447c5b24_e035_4d61_a6dc_6b184c49bb3e.slice - libcontainer container kubepods-besteffort-pod447c5b24_e035_4d61_a6dc_6b184c49bb3e.slice. May 13 23:44:49.605015 systemd[1]: Created slice kubepods-burstable-pod7e3261f6_25de_4f30_bf75_6f786aec4e94.slice - libcontainer container kubepods-burstable-pod7e3261f6_25de_4f30_bf75_6f786aec4e94.slice. May 13 23:44:49.821862 systemd[1]: Created slice kubepods-besteffort-pod74844576_5d15_4959_aa52_fdc4c5736ba8.slice - libcontainer container kubepods-besteffort-pod74844576_5d15_4959_aa52_fdc4c5736ba8.slice. May 13 23:44:51.873315 containerd[1765]: time="2025-05-13T23:44:51.872506512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kqzwk,Uid:74844576-5d15-4959-aa52-fdc4c5736ba8,Namespace:calico-system,Attempt:0,}" May 13 23:44:51.887237 kubelet[3386]: E0513 23:44:51.884492 3386 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.069s" May 13 23:44:52.170705 containerd[1765]: time="2025-05-13T23:44:52.170658195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k92sd,Uid:cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54,Namespace:kube-system,Attempt:0,}" May 13 23:44:52.174382 containerd[1765]: time="2025-05-13T23:44:52.174238679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77478f65f8-x25bh,Uid:b116480f-b995-4242-8c60-f97a56d1d6d1,Namespace:calico-apiserver,Attempt:0,}" May 13 23:44:52.174382 containerd[1765]: time="2025-05-13T23:44:52.174236719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qtclm,Uid:7e3261f6-25de-4f30-bf75-6f786aec4e94,Namespace:kube-system,Attempt:0,}" May 13 23:44:52.179043 containerd[1765]: time="2025-05-13T23:44:52.179012846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c659f749-8p7bb,Uid:a6acda27-adaa-464d-9d6d-5b299eb91453,Namespace:calico-apiserver,Attempt:0,}" May 13 23:44:52.185743 containerd[1765]: time="2025-05-13T23:44:52.185711175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dcd59c5cd-jhn7j,Uid:447c5b24-e035-4d61-a6dc-6b184c49bb3e,Namespace:calico-system,Attempt:0,}" May 13 23:44:52.199320 containerd[1765]: time="2025-05-13T23:44:52.199263593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c659f749-ltpvp,Uid:8b109c94-4423-43d5-babf-1f349408342e,Namespace:calico-apiserver,Attempt:0,}" May 13 23:44:54.814752 containerd[1765]: time="2025-05-13T23:44:54.814620980Z" level=error msg="Failed to destroy network for sandbox \"e170a47ebd1b661d7b08bb0b4c119e5be91313b98de8feb9b77c705b2a80c70b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.826424 containerd[1765]: time="2025-05-13T23:44:54.824594284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kqzwk,Uid:74844576-5d15-4959-aa52-fdc4c5736ba8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e170a47ebd1b661d7b08bb0b4c119e5be91313b98de8feb9b77c705b2a80c70b\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.826580 kubelet[3386]: E0513 23:44:54.824855 3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e170a47ebd1b661d7b08bb0b4c119e5be91313b98de8feb9b77c705b2a80c70b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.826580 kubelet[3386]: E0513 23:44:54.824918 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e170a47ebd1b661d7b08bb0b4c119e5be91313b98de8feb9b77c705b2a80c70b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kqzwk" May 13 23:44:54.826580 kubelet[3386]: E0513 23:44:54.824936 3386 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e170a47ebd1b661d7b08bb0b4c119e5be91313b98de8feb9b77c705b2a80c70b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kqzwk" May 13 23:44:54.826878 kubelet[3386]: E0513 23:44:54.824978 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kqzwk_calico-system(74844576-5d15-4959-aa52-fdc4c5736ba8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kqzwk_calico-system(74844576-5d15-4959-aa52-fdc4c5736ba8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e170a47ebd1b661d7b08bb0b4c119e5be91313b98de8feb9b77c705b2a80c70b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kqzwk" podUID="74844576-5d15-4959-aa52-fdc4c5736ba8" May 13 23:44:54.843741 containerd[1765]: time="2025-05-13T23:44:54.843243448Z" level=error msg="Failed to destroy network for sandbox \"bd85ab034dc50091e4b34824234ec9550c2c686e62676f05d697064391b5590c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.845205 containerd[1765]: time="2025-05-13T23:44:54.845150252Z" level=error msg="Failed to destroy network for sandbox \"63a830bfa9fdb9a25010b536f806cb84d5437e9e426feeb2d44cbe807553e311\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.852486 containerd[1765]: time="2025-05-13T23:44:54.852437710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k92sd,Uid:cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd85ab034dc50091e4b34824234ec9550c2c686e62676f05d697064391b5590c\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.853735 kubelet[3386]: E0513 23:44:54.853691 3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd85ab034dc50091e4b34824234ec9550c2c686e62676f05d697064391b5590c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.853834 kubelet[3386]: E0513 23:44:54.853760 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd85ab034dc50091e4b34824234ec9550c2c686e62676f05d697064391b5590c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k92sd" May 13 23:44:54.853834 kubelet[3386]: E0513 23:44:54.853781 3386 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd85ab034dc50091e4b34824234ec9550c2c686e62676f05d697064391b5590c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k92sd" May 13 23:44:54.855418 kubelet[3386]: E0513 23:44:54.854437 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-k92sd_kube-system(cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-k92sd_kube-system(cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd85ab034dc50091e4b34824234ec9550c2c686e62676f05d697064391b5590c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-k92sd" podUID="cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54" May 13 23:44:54.857402 containerd[1765]: time="2025-05-13T23:44:54.857366722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77478f65f8-x25bh,Uid:b116480f-b995-4242-8c60-f97a56d1d6d1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63a830bfa9fdb9a25010b536f806cb84d5437e9e426feeb2d44cbe807553e311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.857704 kubelet[3386]: E0513 23:44:54.857680 3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63a830bfa9fdb9a25010b536f806cb84d5437e9e426feeb2d44cbe807553e311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.857920 kubelet[3386]: E0513 23:44:54.857899 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63a830bfa9fdb9a25010b536f806cb84d5437e9e426feeb2d44cbe807553e311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77478f65f8-x25bh" May 13 23:44:54.858084 kubelet[3386]: E0513 23:44:54.857988 3386 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63a830bfa9fdb9a25010b536f806cb84d5437e9e426feeb2d44cbe807553e311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77478f65f8-x25bh" May 13 23:44:54.858084 kubelet[3386]: E0513 23:44:54.858039 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77478f65f8-x25bh_calico-apiserver(b116480f-b995-4242-8c60-f97a56d1d6d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77478f65f8-x25bh_calico-apiserver(b116480f-b995-4242-8c60-f97a56d1d6d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63a830bfa9fdb9a25010b536f806cb84d5437e9e426feeb2d44cbe807553e311\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77478f65f8-x25bh" podUID="b116480f-b995-4242-8c60-f97a56d1d6d1" May 13 23:44:54.879784 containerd[1765]: time="2025-05-13T23:44:54.879646575Z" level=error msg="Failed to destroy network for sandbox \"8ee082b4d13c68b75135fc6c6506daaae7e7c57a3b2af82c437bfeda7cb45dbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.880499 containerd[1765]: time="2025-05-13T23:44:54.880403056Z" level=error msg="Failed to destroy network for sandbox \"40b3dd100de157e583d2cffadce35b5a10ed6a075319c1a0302a7e6f1efd08bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.884542 containerd[1765]: time="2025-05-13T23:44:54.884364706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qtclm,Uid:7e3261f6-25de-4f30-bf75-6f786aec4e94,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee082b4d13c68b75135fc6c6506daaae7e7c57a3b2af82c437bfeda7cb45dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.886470 kubelet[3386]: E0513 23:44:54.884897 3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee082b4d13c68b75135fc6c6506daaae7e7c57a3b2af82c437bfeda7cb45dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.886470 
kubelet[3386]: E0513 23:44:54.884958 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee082b4d13c68b75135fc6c6506daaae7e7c57a3b2af82c437bfeda7cb45dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qtclm" May 13 23:44:54.886470 kubelet[3386]: E0513 23:44:54.884976 3386 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee082b4d13c68b75135fc6c6506daaae7e7c57a3b2af82c437bfeda7cb45dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qtclm" May 13 23:44:54.886603 kubelet[3386]: E0513 23:44:54.885013 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qtclm_kube-system(7e3261f6-25de-4f30-bf75-6f786aec4e94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qtclm_kube-system(7e3261f6-25de-4f30-bf75-6f786aec4e94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ee082b4d13c68b75135fc6c6506daaae7e7c57a3b2af82c437bfeda7cb45dbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qtclm" podUID="7e3261f6-25de-4f30-bf75-6f786aec4e94" May 13 23:44:54.890248 containerd[1765]: time="2025-05-13T23:44:54.890194080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c659f749-8p7bb,Uid:a6acda27-adaa-464d-9d6d-5b299eb91453,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40b3dd100de157e583d2cffadce35b5a10ed6a075319c1a0302a7e6f1efd08bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.890558 kubelet[3386]: E0513 23:44:54.890519 3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40b3dd100de157e583d2cffadce35b5a10ed6a075319c1a0302a7e6f1efd08bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.890621 kubelet[3386]: E0513 23:44:54.890569 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40b3dd100de157e583d2cffadce35b5a10ed6a075319c1a0302a7e6f1efd08bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c659f749-8p7bb" May 13 23:44:54.890621 kubelet[3386]: E0513 23:44:54.890587 3386 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"40b3dd100de157e583d2cffadce35b5a10ed6a075319c1a0302a7e6f1efd08bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c659f749-8p7bb" May 13 23:44:54.890678 kubelet[3386]: E0513 23:44:54.890624 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c659f749-8p7bb_calico-apiserver(a6acda27-adaa-464d-9d6d-5b299eb91453)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c659f749-8p7bb_calico-apiserver(a6acda27-adaa-464d-9d6d-5b299eb91453)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40b3dd100de157e583d2cffadce35b5a10ed6a075319c1a0302a7e6f1efd08bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c659f749-8p7bb" podUID="a6acda27-adaa-464d-9d6d-5b299eb91453" May 13 23:44:54.891290 containerd[1765]: time="2025-05-13T23:44:54.891148882Z" level=error msg="Failed to destroy network for sandbox \"8835bf1e024bd89ade1c1d854ef493e8b42ff7a97bfbe464cad62872caa6a06f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.893414 containerd[1765]: time="2025-05-13T23:44:54.893326327Z" level=error msg="Failed to destroy network for sandbox \"ec881b74b2105c220e6b093cc4f22cd3a1e35250b55f8aa5679607f7db36590b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.899236 containerd[1765]: time="2025-05-13T23:44:54.899183421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c659f749-ltpvp,Uid:8b109c94-4423-43d5-babf-1f349408342e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8835bf1e024bd89ade1c1d854ef493e8b42ff7a97bfbe464cad62872caa6a06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.899724 kubelet[3386]: E0513 23:44:54.899589 3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8835bf1e024bd89ade1c1d854ef493e8b42ff7a97bfbe464cad62872caa6a06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.899724 kubelet[3386]: E0513 23:44:54.899651 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8835bf1e024bd89ade1c1d854ef493e8b42ff7a97bfbe464cad62872caa6a06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c659f749-ltpvp" May 13 23:44:54.899724 kubelet[3386]: E0513 23:44:54.899668 3386 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8835bf1e024bd89ade1c1d854ef493e8b42ff7a97bfbe464cad62872caa6a06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c659f749-ltpvp" May 13 23:44:54.900010 kubelet[3386]: E0513 23:44:54.899698 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c659f749-ltpvp_calico-apiserver(8b109c94-4423-43d5-babf-1f349408342e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c659f749-ltpvp_calico-apiserver(8b109c94-4423-43d5-babf-1f349408342e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8835bf1e024bd89ade1c1d854ef493e8b42ff7a97bfbe464cad62872caa6a06f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c659f749-ltpvp" podUID="8b109c94-4423-43d5-babf-1f349408342e" May 13 23:44:54.909076 containerd[1765]: time="2025-05-13T23:44:54.908941884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dcd59c5cd-jhn7j,Uid:447c5b24-e035-4d61-a6dc-6b184c49bb3e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec881b74b2105c220e6b093cc4f22cd3a1e35250b55f8aa5679607f7db36590b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.909622 kubelet[3386]: E0513 23:44:54.909329 3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec881b74b2105c220e6b093cc4f22cd3a1e35250b55f8aa5679607f7db36590b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:44:54.909622 kubelet[3386]: E0513 23:44:54.909379 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec881b74b2105c220e6b093cc4f22cd3a1e35250b55f8aa5679607f7db36590b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dcd59c5cd-jhn7j" May 13 23:44:54.909622 kubelet[3386]: E0513 23:44:54.909396 3386 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec881b74b2105c220e6b093cc4f22cd3a1e35250b55f8aa5679607f7db36590b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dcd59c5cd-jhn7j" May 13 23:44:54.909938 kubelet[3386]: E0513 23:44:54.909429 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7dcd59c5cd-jhn7j_calico-system(447c5b24-e035-4d61-a6dc-6b184c49bb3e)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7dcd59c5cd-jhn7j_calico-system(447c5b24-e035-4d61-a6dc-6b184c49bb3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec881b74b2105c220e6b093cc4f22cd3a1e35250b55f8aa5679607f7db36590b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dcd59c5cd-jhn7j" podUID="447c5b24-e035-4d61-a6dc-6b184c49bb3e" May 13 23:44:54.957232 containerd[1765]: time="2025-05-13T23:44:54.957157399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 23:44:55.055700 kubelet[3386]: I0513 23:44:55.055329 3386 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:44:55.628841 systemd[1]: run-netns-cni\x2d0147db26\x2daf3b\x2df2ac\x2d49a4\x2d6522f55098f7.mount: Deactivated successfully. May 13 23:44:55.629254 systemd[1]: run-netns-cni\x2d4cd61cdb\x2ddfd5\x2db673\x2d2266\x2de94f33519be4.mount: Deactivated successfully. May 13 23:44:55.629444 systemd[1]: run-netns-cni\x2d3fb332a7\x2db034\x2d6375\x2de5ca\x2d7c85091528df.mount: Deactivated successfully. May 13 23:44:55.629571 systemd[1]: run-netns-cni\x2da8ba0960\x2d8228\x2dac83\x2db6b2\x2d968f2b6ae4a2.mount: Deactivated successfully. May 13 23:44:55.629692 systemd[1]: run-netns-cni\x2d1da85d6d\x2da09e\x2ddf17\x2d90c2\x2d5b2f98af54cd.mount: Deactivated successfully. May 13 23:44:59.147997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558728714.mount: Deactivated successfully. May 13 23:44:59.715360 containerd[1765]: time="2025-05-13T23:44:59.715134592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:59.718612 containerd[1765]: time="2025-05-13T23:44:59.718513918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 13 23:44:59.728109 containerd[1765]: time="2025-05-13T23:44:59.728041852Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:59.735460 containerd[1765]: time="2025-05-13T23:44:59.734462182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:59.735460 containerd[1765]: time="2025-05-13T23:44:59.735005423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.777803984s" May 13 23:44:59.735460 containerd[1765]: time="2025-05-13T23:44:59.735035703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 13 23:44:59.752362 containerd[1765]: time="2025-05-13T23:44:59.752250169Z" level=info msg="CreateContainer within sandbox \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 23:44:59.800979 containerd[1765]: time="2025-05-13T23:44:59.800930203Z" level=info msg="Container 9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:59.839936 containerd[1765]: time="2025-05-13T23:44:59.839888142Z" level=info msg="CreateContainer within sandbox \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\"" May 13 23:44:59.840897 containerd[1765]: time="2025-05-13T23:44:59.840852863Z" level=info msg="StartContainer for \"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\"" May 13 23:44:59.842603 containerd[1765]: time="2025-05-13T23:44:59.842557106Z" level=info msg="connecting to shim 9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74" address="unix:///run/containerd/s/c667d15a2a2391ceb3a0f10b823a646ef92695591d8e09abc116afa3b208383c" protocol=ttrpc version=3 May 13 23:44:59.863380 systemd[1]: Started cri-containerd-9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74.scope - libcontainer container 9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74. May 13 23:44:59.907721 containerd[1765]: time="2025-05-13T23:44:59.907676725Z" level=info msg="StartContainer for \"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" returns successfully" May 13 23:45:00.000307 kubelet[3386]: I0513 23:44:59.999052 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4pt9x" podStartSLOduration=2.668049289 podStartE2EDuration="32.999036224s" podCreationTimestamp="2025-05-13 23:44:27 +0000 UTC" firstStartedPulling="2025-05-13 23:44:29.406192451 +0000 UTC m=+16.703129060" lastFinishedPulling="2025-05-13 23:44:59.737179386 +0000 UTC m=+47.034115995" observedRunningTime="2025-05-13 23:44:59.998719663 +0000 UTC m=+47.295656232" watchObservedRunningTime="2025-05-13 23:44:59.999036224 +0000 UTC m=+47.295972833" May 13 23:45:00.187478 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 23:45:00.187602 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 13 23:45:00.192971 containerd[1765]: time="2025-05-13T23:45:00.192930358Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" id:\"de46c521d95d39890a18a897f78121e7f410433e92df48ebefa638a7001c1619\" pid:4476 exit_status:1 exited_at:{seconds:1747179900 nanos:192412238}" May 13 23:45:01.032957 containerd[1765]: time="2025-05-13T23:45:01.032879395Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" id:\"3e397f75dae794dfb34e83bc9d01137dc72e7f9915e99fd5b3604c42934f656c\" pid:4525 exit_status:1 exited_at:{seconds:1747179901 nanos:32485714}" May 13 23:45:01.911247 kernel: bpftool[4658]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 23:45:02.120925 systemd-networkd[1332]: vxlan.calico: Link UP May 13 23:45:02.120936 systemd-networkd[1332]: vxlan.calico: Gained carrier May 13 23:45:03.480396 systemd-networkd[1332]: vxlan.calico: Gained IPv6LL May 13 23:45:05.816509 containerd[1765]: time="2025-05-13T23:45:05.816334281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c659f749-ltpvp,Uid:8b109c94-4423-43d5-babf-1f349408342e,Namespace:calico-apiserver,Attempt:0,}" May 13 23:45:05.816509 containerd[1765]: time="2025-05-13T23:45:05.816334121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qtclm,Uid:7e3261f6-25de-4f30-bf75-6f786aec4e94,Namespace:kube-system,Attempt:0,}" May 13 23:45:06.088413 systemd-networkd[1332]: caliddc393794ae: Link UP May 13 23:45:06.088620 systemd-networkd[1332]: caliddc393794ae: Gained carrier May 13 23:45:06.106258 containerd[1765]: 2025-05-13 23:45:05.897 [INFO][4738] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--qtclm-eth0 coredns-6f6b679f8f- kube-system 7e3261f6-25de-4f30-bf75-6f786aec4e94 717 0 2025-05-13 23:44:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-n-13ce75130c coredns-6f6b679f8f-qtclm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliddc393794ae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" Namespace="kube-system" Pod="coredns-6f6b679f8f-qtclm" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--qtclm-" May 13 23:45:06.106258 containerd[1765]: 2025-05-13 23:45:05.897 [INFO][4738] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" Namespace="kube-system" Pod="coredns-6f6b679f8f-qtclm" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--qtclm-eth0" May 13 23:45:06.106258 containerd[1765]: 2025-05-13 23:45:05.936 [INFO][4758] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" HandleID="k8s-pod-network.b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" Workload="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--qtclm-eth0" May 13 23:45:06.106921 containerd[1765]: 2025-05-13 23:45:06.053 [INFO][4758] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" HandleID="k8s-pod-network.b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" Workload="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--qtclm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031aa40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-n-13ce75130c", "pod":"coredns-6f6b679f8f-qtclm", "timestamp":"2025-05-13 23:45:05.936134771 +0000 UTC"}, Hostname:"ci-4284.0.0-n-13ce75130c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:45:06.106921 containerd[1765]: 2025-05-13 23:45:06.053 [INFO][4758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:06.106921 containerd[1765]: 2025-05-13 23:45:06.053 [INFO][4758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:45:06.106921 containerd[1765]: 2025-05-13 23:45:06.053 [INFO][4758] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-13ce75130c' May 13 23:45:06.106921 containerd[1765]: 2025-05-13 23:45:06.055 [INFO][4758] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.106921 containerd[1765]: 2025-05-13 23:45:06.059 [INFO][4758] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.106921 containerd[1765]: 2025-05-13 23:45:06.062 [INFO][4758] ipam/ipam.go 489: Trying affinity for 192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.106921 containerd[1765]: 2025-05-13 23:45:06.064 [INFO][4758] ipam/ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.106921 containerd[1765]: 2025-05-13 23:45:06.066 [INFO][4758] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.107169 containerd[1765]: 2025-05-13 23:45:06.066 [INFO][4758] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.107169 containerd[1765]: 2025-05-13 23:45:06.067 [INFO][4758] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8 May 13 23:45:06.107169 containerd[1765]: 2025-05-13 23:45:06.074 [INFO][4758] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.107169 containerd[1765]: 2025-05-13 23:45:06.079 [INFO][4758] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.119.193/26] block=192.168.119.192/26 handle="k8s-pod-network.b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.107169 containerd[1765]: 2025-05-13 23:45:06.079 [INFO][4758] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.193/26] handle="k8s-pod-network.b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.107169 containerd[1765]: 2025-05-13 23:45:06.080 
[INFO][4758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:45:06.107169 containerd[1765]: 2025-05-13 23:45:06.080 [INFO][4758] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.193/26] IPv6=[] ContainerID="b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" HandleID="k8s-pod-network.b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" Workload="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--qtclm-eth0" May 13 23:45:06.107333 containerd[1765]: 2025-05-13 23:45:06.082 [INFO][4738] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" Namespace="kube-system" Pod="coredns-6f6b679f8f-qtclm" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--qtclm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--qtclm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7e3261f6-25de-4f30-bf75-6f786aec4e94", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"", Pod:"coredns-6f6b679f8f-qtclm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliddc393794ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:06.107333 containerd[1765]: 2025-05-13 23:45:06.083 [INFO][4738] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.119.193/32] ContainerID="b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" Namespace="kube-system" Pod="coredns-6f6b679f8f-qtclm" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--qtclm-eth0" May 13 23:45:06.107333 containerd[1765]: 2025-05-13 23:45:06.083 [INFO][4738] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddc393794ae ContainerID="b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" Namespace="kube-system" Pod="coredns-6f6b679f8f-qtclm" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--qtclm-eth0" May 13 23:45:06.107333 containerd[1765]: 2025-05-13 23:45:06.088 [INFO][4738] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" 
Namespace="kube-system" Pod="coredns-6f6b679f8f-qtclm" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--qtclm-eth0" May 13 23:45:06.107333 containerd[1765]: 2025-05-13 23:45:06.089 [INFO][4738] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" Namespace="kube-system" Pod="coredns-6f6b679f8f-qtclm" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--qtclm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--qtclm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7e3261f6-25de-4f30-bf75-6f786aec4e94", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8", Pod:"coredns-6f6b679f8f-qtclm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliddc393794ae", MAC:"e6:f8:92:bb:ed:01", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:06.107333 containerd[1765]: 2025-05-13 23:45:06.102 [INFO][4738] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" Namespace="kube-system" Pod="coredns-6f6b679f8f-qtclm" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--qtclm-eth0" May 13 23:45:06.193327 systemd-networkd[1332]: cali87b07a9e57b: Link UP May 13 23:45:06.193901 systemd-networkd[1332]: cali87b07a9e57b: Gained carrier May 13 23:45:06.208033 containerd[1765]: time="2025-05-13T23:45:06.207986008Z" level=info msg="connecting to shim b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8" address="unix:///run/containerd/s/0bd6ba2c92cbf321c7f9e89e5d305037b745cb5429dfd9e05fb46f9a6ded38b0" namespace=k8s.io protocol=ttrpc version=3 May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:05.898 [INFO][4734] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0 calico-apiserver-6c659f749- calico-apiserver 8b109c94-4423-43d5-babf-1f349408342e 719 0 2025-05-13 23:44:27 
+0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c659f749 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-13ce75130c calico-apiserver-6c659f749-ltpvp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali87b07a9e57b [] []}} ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-ltpvp" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-" May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:05.898 [INFO][4734] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-ltpvp" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:05.936 [INFO][4760] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" HandleID="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.055 [INFO][4760] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" HandleID="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-13ce75130c", "pod":"calico-apiserver-6c659f749-ltpvp", "timestamp":"2025-05-13 23:45:05.936329852 +0000 UTC"}, Hostname:"ci-4284.0.0-n-13ce75130c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.055 [INFO][4760] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.079 [INFO][4760] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.080 [INFO][4760] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-13ce75130c' May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.156 [INFO][4760] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.160 [INFO][4760] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.164 [INFO][4760] ipam/ipam.go 489: Trying affinity for 192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.166 [INFO][4760] ipam/ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.168 [INFO][4760] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.168 [INFO][4760] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.171 [INFO][4760] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117 May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.178 [INFO][4760] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.186 [INFO][4760] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.119.194/26] block=192.168.119.192/26 handle="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.187 [INFO][4760] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.194/26] handle="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.187 [INFO][4760] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
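The ipam.go sequence that just completed (acquire the host-wide IPAM lock, confirm the block affinity for 192.168.119.192/26, claim one address, release the lock) is the same one that handed coredns-6f6b679f8f-qtclm 192.168.119.193 a moment earlier; this pass claims 192.168.119.194 for calico-apiserver-6c659f749-ltpvp. A minimal Go sketch of the block arithmetic only, assuming nothing about Calico's datastore bookkeeping:

package main

import (
    "fmt"
    "net/netip"
)

func main() {
    // The block this node (ci-4284.0.0-n-13ce75130c) holds an affinity for.
    block := netip.MustParsePrefix("192.168.119.192/26")

    // A /26 spans 2^(32-26) = 64 addresses, .192 through .255.
    size := 1 << (32 - block.Bits())
    fmt.Printf("block %s holds %d addresses\n", block, size)

    // Walking the block in order reproduces the assignments seen in the log
    // (.193, .194, .195, ...); the real allocator also consults handles and
    // reservations in its datastore before picking a candidate.
    addr := block.Addr().Next()
    for i := 0; i < 6; i++ {
        fmt.Printf("candidate %d: %s\n", i+1, addr)
        addr = addr.Next()
    }
}

Each pod still receives a /32 entry in its WorkloadEndpoint IPNetworks; the /26 only records which node the block is affine to.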
May 13 23:45:06.227585 containerd[1765]: 2025-05-13 23:45:06.187 [INFO][4760] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.194/26] IPv6=[] ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" HandleID="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:45:06.228570 containerd[1765]: 2025-05-13 23:45:06.189 [INFO][4734] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-ltpvp" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0", GenerateName:"calico-apiserver-6c659f749-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b109c94-4423-43d5-babf-1f349408342e", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c659f749", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"", Pod:"calico-apiserver-6c659f749-ltpvp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87b07a9e57b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:06.228570 containerd[1765]: 2025-05-13 23:45:06.189 [INFO][4734] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.119.194/32] ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-ltpvp" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:45:06.228570 containerd[1765]: 2025-05-13 23:45:06.189 [INFO][4734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87b07a9e57b ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-ltpvp" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:45:06.228570 containerd[1765]: 2025-05-13 23:45:06.194 [INFO][4734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-ltpvp" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:45:06.228570 containerd[1765]: 2025-05-13 23:45:06.195 [INFO][4734] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-ltpvp" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0", GenerateName:"calico-apiserver-6c659f749-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b109c94-4423-43d5-babf-1f349408342e", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c659f749", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117", Pod:"calico-apiserver-6c659f749-ltpvp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87b07a9e57b", MAC:"c6:a3:08:cf:d0:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:06.228570 containerd[1765]: 2025-05-13 23:45:06.218 [INFO][4734] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-ltpvp" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:45:06.268371 systemd[1]: Started cri-containerd-b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8.scope - libcontainer container b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8. 
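Both endpoint dumps above name the host-side veth that pairs with the pod's eth0: caliddc393794ae for the coredns pod and cali87b07a9e57b for the apiserver pod. Linux caps interface names at 15 characters (IFNAMSIZ of 16 including the trailing NUL), which is exactly what "cali" plus an 11-character suffix fills. A hypothetical sketch of deriving a stable name of that shape by hashing a workload identifier; the suffix scheme and the identifier used here are assumptions for illustration, not necessarily what cni-plugin/dataplane_linux.go actually does:

package main

import (
    "crypto/sha1"
    "encoding/hex"
    "fmt"
)

// vethNameFor builds a deterministic 15-character interface name:
// a fixed "cali" prefix plus the first 11 hex digits of a SHA-1 digest.
// A deterministic name makes it possible to find the host-side veth
// again later without storing extra state.
func vethNameFor(workloadID string) string {
    sum := sha1.Sum([]byte(workloadID))
    return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
    // Hypothetical workload identifier; the real plugin constructs its own key.
    fmt.Println(vethNameFor("kube-system/coredns-6f6b679f8f-qtclm"))
}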
May 13 23:45:06.304115 containerd[1765]: time="2025-05-13T23:45:06.303866896Z" level=info msg="connecting to shim 3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" address="unix:///run/containerd/s/1a4bd4c00b37cbfc7d40ef9cd0576ea9feb87bc13cc79ceaa41d4c34b5d6fb2c" namespace=k8s.io protocol=ttrpc version=3 May 13 23:45:06.328600 containerd[1765]: time="2025-05-13T23:45:06.328492419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qtclm,Uid:7e3261f6-25de-4f30-bf75-6f786aec4e94,Namespace:kube-system,Attempt:0,} returns sandbox id \"b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8\"" May 13 23:45:06.333686 containerd[1765]: time="2025-05-13T23:45:06.333646148Z" level=info msg="CreateContainer within sandbox \"b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:45:06.344448 systemd[1]: Started cri-containerd-3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117.scope - libcontainer container 3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117. May 13 23:45:06.380783 containerd[1765]: time="2025-05-13T23:45:06.380746031Z" level=info msg="Container f9a3db49ed315abd2c341e8ad5c12c3bab527fb51e9f71defc04f83a66c20e0a: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:06.390355 containerd[1765]: time="2025-05-13T23:45:06.390323408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c659f749-ltpvp,Uid:8b109c94-4423-43d5-babf-1f349408342e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\"" May 13 23:45:06.392004 containerd[1765]: time="2025-05-13T23:45:06.391975171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:45:06.418790 containerd[1765]: time="2025-05-13T23:45:06.418739137Z" level=info msg="CreateContainer within sandbox \"b59be42ac997ce000446d641bcf3def0374346c9dc92ec693bf3cc7c36db0da8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9a3db49ed315abd2c341e8ad5c12c3bab527fb51e9f71defc04f83a66c20e0a\"" May 13 23:45:06.421903 containerd[1765]: time="2025-05-13T23:45:06.421852383Z" level=info msg="StartContainer for \"f9a3db49ed315abd2c341e8ad5c12c3bab527fb51e9f71defc04f83a66c20e0a\"" May 13 23:45:06.423076 containerd[1765]: time="2025-05-13T23:45:06.423042105Z" level=info msg="connecting to shim f9a3db49ed315abd2c341e8ad5c12c3bab527fb51e9f71defc04f83a66c20e0a" address="unix:///run/containerd/s/0bd6ba2c92cbf321c7f9e89e5d305037b745cb5429dfd9e05fb46f9a6ded38b0" protocol=ttrpc version=3 May 13 23:45:06.445363 systemd[1]: Started cri-containerd-f9a3db49ed315abd2c341e8ad5c12c3bab527fb51e9f71defc04f83a66c20e0a.scope - libcontainer container f9a3db49ed315abd2c341e8ad5c12c3bab527fb51e9f71defc04f83a66c20e0a. 
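The containerd records around here trace the container lifecycle behind the CNI work: "connecting to shim" opens a ttrpc connection to the sandbox's shim socket under /run/containerd/s/, RunPodSandbox returns the sandbox ID, CreateContainer builds the coredns container inside it, and StartContainer launches it (the "returns successfully" follows just below). On this node kubelet drives all of that through the CRI; purely to make the create/start split concrete, here is a standalone sketch using containerd's documented Go client against the same k8s.io namespace (the image ref is from the log, the IDs and the teardown are made up):

package main

import (
    "context"
    "log"
    "syscall"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/cio"
    "github.com/containerd/containerd/namespaces"
    "github.com/containerd/containerd/oci"
)

func main() {
    // Same daemon the kubelet talks to, same CRI namespace.
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    // Roughly the PullImage step seen in the log.
    image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.3", containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }

    // CreateContainer: metadata, a snapshot and an OCI spec, but no process yet.
    container, err := client.NewContainer(ctx, "apiserver-demo",
        containerd.WithImage(image),
        containerd.WithNewSnapshot("apiserver-demo-snap", image),
        containerd.WithNewSpec(oci.WithImageConfig(image)),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer container.Delete(ctx, containerd.WithSnapshotCleanup)

    // NewTask is where the shim connection comes into play;
    // Start corresponds to the StartContainer entries.
    task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    if err != nil {
        log.Fatal(err)
    }
    defer task.Delete(ctx)

    exitCh, err := task.Wait(ctx)
    if err != nil {
        log.Fatal(err)
    }
    if err := task.Start(ctx); err != nil {
        log.Fatal(err)
    }
    log.Println("task started; stopping the demo again")
    if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
        log.Fatal(err)
    }
    <-exitCh
}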
May 13 23:45:06.475642 containerd[1765]: time="2025-05-13T23:45:06.475597117Z" level=info msg="StartContainer for \"f9a3db49ed315abd2c341e8ad5c12c3bab527fb51e9f71defc04f83a66c20e0a\" returns successfully" May 13 23:45:06.817760 containerd[1765]: time="2025-05-13T23:45:06.817648317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77478f65f8-x25bh,Uid:b116480f-b995-4242-8c60-f97a56d1d6d1,Namespace:calico-apiserver,Attempt:0,}" May 13 23:45:06.937509 systemd-networkd[1332]: cali4be9b7d2edb: Link UP May 13 23:45:06.939150 systemd-networkd[1332]: cali4be9b7d2edb: Gained carrier May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.861 [INFO][4925] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--x25bh-eth0 calico-apiserver-77478f65f8- calico-apiserver b116480f-b995-4242-8c60-f97a56d1d6d1 718 0 2025-05-13 23:44:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77478f65f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-13ce75130c calico-apiserver-77478f65f8-x25bh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4be9b7d2edb [] []}} ContainerID="99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-x25bh" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--x25bh-" May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.861 [INFO][4925] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-x25bh" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--x25bh-eth0" May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.889 [INFO][4933] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" HandleID="k8s-pod-network.99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--x25bh-eth0" May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.900 [INFO][4933] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" HandleID="k8s-pod-network.99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--x25bh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001fc080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-13ce75130c", "pod":"calico-apiserver-77478f65f8-x25bh", "timestamp":"2025-05-13 23:45:06.889581323 +0000 UTC"}, Hostname:"ci-4284.0.0-n-13ce75130c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.900 [INFO][4933] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.900 [INFO][4933] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.900 [INFO][4933] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-13ce75130c' May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.901 [INFO][4933] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.905 [INFO][4933] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.909 [INFO][4933] ipam/ipam.go 489: Trying affinity for 192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.910 [INFO][4933] ipam/ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.913 [INFO][4933] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.913 [INFO][4933] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.917 [INFO][4933] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7 May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.922 [INFO][4933] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.931 [INFO][4933] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.119.195/26] block=192.168.119.192/26 handle="k8s-pod-network.99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.932 [INFO][4933] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.195/26] handle="k8s-pod-network.99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.932 [INFO][4933] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:45:06.958811 containerd[1765]: 2025-05-13 23:45:06.932 [INFO][4933] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.195/26] IPv6=[] ContainerID="99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" HandleID="k8s-pod-network.99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--x25bh-eth0" May 13 23:45:06.960416 containerd[1765]: 2025-05-13 23:45:06.934 [INFO][4925] cni-plugin/k8s.go 386: Populated endpoint ContainerID="99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-x25bh" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--x25bh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--x25bh-eth0", GenerateName:"calico-apiserver-77478f65f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"b116480f-b995-4242-8c60-f97a56d1d6d1", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77478f65f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"", Pod:"calico-apiserver-77478f65f8-x25bh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4be9b7d2edb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:06.960416 containerd[1765]: 2025-05-13 23:45:06.934 [INFO][4925] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.119.195/32] ContainerID="99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-x25bh" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--x25bh-eth0" May 13 23:45:06.960416 containerd[1765]: 2025-05-13 23:45:06.934 [INFO][4925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4be9b7d2edb ContainerID="99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-x25bh" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--x25bh-eth0" May 13 23:45:06.960416 containerd[1765]: 2025-05-13 23:45:06.937 [INFO][4925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-x25bh" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--x25bh-eth0" May 13 23:45:06.960416 containerd[1765]: 2025-05-13 23:45:06.938 [INFO][4925] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-x25bh" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--x25bh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--x25bh-eth0", GenerateName:"calico-apiserver-77478f65f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"b116480f-b995-4242-8c60-f97a56d1d6d1", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77478f65f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7", Pod:"calico-apiserver-77478f65f8-x25bh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4be9b7d2edb", MAC:"86:dc:c2:e3:8d:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:06.960416 containerd[1765]: 2025-05-13 23:45:06.954 [INFO][4925] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-x25bh" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--x25bh-eth0" May 13 23:45:06.998993 kubelet[3386]: I0513 23:45:06.998681 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qtclm" podStartSLOduration=49.998661874 podStartE2EDuration="49.998661874s" podCreationTimestamp="2025-05-13 23:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:45:06.998340994 +0000 UTC m=+54.295277603" watchObservedRunningTime="2025-05-13 23:45:06.998661874 +0000 UTC m=+54.295598483" May 13 23:45:07.060578 containerd[1765]: time="2025-05-13T23:45:07.059178261Z" level=info msg="connecting to shim 99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7" address="unix:///run/containerd/s/6c41283aba64d008fabee3a063062f9655d5eb7fbff2df6650b3446c4b4dd751" namespace=k8s.io protocol=ttrpc version=3 May 13 23:45:07.093437 systemd[1]: Started cri-containerd-99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7.scope - libcontainer container 99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7. 
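The kubelet line above reports podStartSLOduration=49.998661874 for coredns-6f6b679f8f-qtclm, with both pull timestamps left at the zero value 0001-01-01 because no image had to be pulled. With nothing to deduct for pulls, the figure is essentially observedRunningTime minus podCreationTimestamp; a quick check of that arithmetic in Go (this reading of the tracker's fields is an assumption, and the tracker samples its own clock, so the last fraction of a millisecond differs):

package main

import (
    "fmt"
    "time"
)

func main() {
    const layout = "2006-01-02 15:04:05 -0700 MST"

    created, err := time.Parse(layout, "2025-05-13 23:44:17 +0000 UTC")
    if err != nil {
        panic(err)
    }
    // Go accepts a fractional second on input even though the layout omits it.
    running, err := time.Parse(layout, "2025-05-13 23:45:06.998340994 +0000 UTC")
    if err != nil {
        panic(err)
    }

    // Prints 49.998340994s, within a fraction of a millisecond of the
    // podStartSLOduration logged by the kubelet.
    fmt.Println(running.Sub(created))
}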
May 13 23:45:07.147308 containerd[1765]: time="2025-05-13T23:45:07.146722534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77478f65f8-x25bh,Uid:b116480f-b995-4242-8c60-f97a56d1d6d1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7\"" May 13 23:45:07.576387 systemd-networkd[1332]: caliddc393794ae: Gained IPv6LL May 13 23:45:07.817392 containerd[1765]: time="2025-05-13T23:45:07.817126990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k92sd,Uid:cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54,Namespace:kube-system,Attempt:0,}" May 13 23:45:07.824148 containerd[1765]: time="2025-05-13T23:45:07.824098322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dcd59c5cd-jhn7j,Uid:447c5b24-e035-4d61-a6dc-6b184c49bb3e,Namespace:calico-system,Attempt:0,}" May 13 23:45:07.987308 systemd-networkd[1332]: calida93b22efd9: Link UP May 13 23:45:07.988206 systemd-networkd[1332]: calida93b22efd9: Gained carrier May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.880 [INFO][5011] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--k92sd-eth0 coredns-6f6b679f8f- kube-system cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54 710 0 2025-05-13 23:44:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-n-13ce75130c coredns-6f6b679f8f-k92sd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida93b22efd9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-k92sd" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--k92sd-" May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.880 [INFO][5011] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-k92sd" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--k92sd-eth0" May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.922 [INFO][5034] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" HandleID="k8s-pod-network.6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" Workload="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--k92sd-eth0" May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.938 [INFO][5034] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" HandleID="k8s-pod-network.6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" Workload="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--k92sd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028ce20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-n-13ce75130c", "pod":"coredns-6f6b679f8f-k92sd", "timestamp":"2025-05-13 23:45:07.922752775 +0000 UTC"}, Hostname:"ci-4284.0.0-n-13ce75130c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.939 [INFO][5034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.939 [INFO][5034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.939 [INFO][5034] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-13ce75130c' May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.943 [INFO][5034] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.950 [INFO][5034] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.957 [INFO][5034] ipam/ipam.go 489: Trying affinity for 192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.959 [INFO][5034] ipam/ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.961 [INFO][5034] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.961 [INFO][5034] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.962 [INFO][5034] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5 May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.969 [INFO][5034] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.979 [INFO][5034] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.119.196/26] block=192.168.119.192/26 handle="k8s-pod-network.6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.979 [INFO][5034] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.196/26] handle="k8s-pod-network.6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.979 [INFO][5034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
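Interleaved with the CNI entries, systemd-networkd logs each new caliXXX veth going through Link UP, Gained carrier and later Gained IPv6LL, the point at which the interface has its automatically configured fe80::/64 link-local address (caliddc393794ae just above, the other cali interfaces further down). A small stdlib-only Go check you could run on the node to see that address; the interface name is taken from the log and the check itself is only an illustration:

package main

import (
    "fmt"
    "log"
    "net"
)

func main() {
    // Host-side veth created for coredns-6f6b679f8f-qtclm, per the log.
    iface, err := net.InterfaceByName("caliddc393794ae")
    if err != nil {
        log.Fatal(err)
    }
    addrs, err := iface.Addrs()
    if err != nil {
        log.Fatal(err)
    }
    for _, a := range addrs {
        ipnet, ok := a.(*net.IPNet)
        if !ok {
            continue
        }
        if ipnet.IP.IsLinkLocalUnicast() {
            // The fe80:: address whose arrival "Gained IPv6LL" records.
            fmt.Println("link-local:", ipnet)
        }
    }
}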
May 13 23:45:08.010435 containerd[1765]: 2025-05-13 23:45:07.979 [INFO][5034] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.196/26] IPv6=[] ContainerID="6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" HandleID="k8s-pod-network.6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" Workload="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--k92sd-eth0" May 13 23:45:08.010931 containerd[1765]: 2025-05-13 23:45:07.983 [INFO][5011] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-k92sd" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--k92sd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--k92sd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"", Pod:"coredns-6f6b679f8f-k92sd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida93b22efd9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:08.010931 containerd[1765]: 2025-05-13 23:45:07.983 [INFO][5011] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.119.196/32] ContainerID="6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-k92sd" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--k92sd-eth0" May 13 23:45:08.010931 containerd[1765]: 2025-05-13 23:45:07.983 [INFO][5011] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida93b22efd9 ContainerID="6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-k92sd" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--k92sd-eth0" May 13 23:45:08.010931 containerd[1765]: 2025-05-13 23:45:07.988 [INFO][5011] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-k92sd" 
WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--k92sd-eth0" May 13 23:45:08.010931 containerd[1765]: 2025-05-13 23:45:07.989 [INFO][5011] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-k92sd" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--k92sd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--k92sd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5", Pod:"coredns-6f6b679f8f-k92sd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida93b22efd9", MAC:"26:2a:09:26:78:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:08.010931 containerd[1765]: 2025-05-13 23:45:08.007 [INFO][5011] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-k92sd" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-coredns--6f6b679f8f--k92sd-eth0" May 13 23:45:08.077514 containerd[1765]: time="2025-05-13T23:45:08.077047445Z" level=info msg="connecting to shim 6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5" address="unix:///run/containerd/s/9ecc465fc2bfa40981531d54a79bbd2e4f1aac88beb2fca5bbf37c040a42c603" namespace=k8s.io protocol=ttrpc version=3 May 13 23:45:08.102712 systemd-networkd[1332]: calib4ae646a43c: Link UP May 13 23:45:08.104325 systemd-networkd[1332]: calib4ae646a43c: Gained carrier May 13 23:45:08.117394 systemd[1]: Started cri-containerd-6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5.scope - libcontainer container 6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5. 
May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:07.899 [INFO][5019] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0 calico-kube-controllers-7dcd59c5cd- calico-system 447c5b24-e035-4d61-a6dc-6b184c49bb3e 716 0 2025-05-13 23:44:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7dcd59c5cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284.0.0-n-13ce75130c calico-kube-controllers-7dcd59c5cd-jhn7j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib4ae646a43c [] []}} ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Namespace="calico-system" Pod="calico-kube-controllers-7dcd59c5cd-jhn7j" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-" May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:07.899 [INFO][5019] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Namespace="calico-system" Pod="calico-kube-controllers-7dcd59c5cd-jhn7j" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:07.935 [INFO][5039] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" HandleID="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:07.952 [INFO][5039] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" HandleID="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d680), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-13ce75130c", "pod":"calico-kube-controllers-7dcd59c5cd-jhn7j", "timestamp":"2025-05-13 23:45:07.935663518 +0000 UTC"}, Hostname:"ci-4284.0.0-n-13ce75130c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:07.952 [INFO][5039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:07.979 [INFO][5039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:07.979 [INFO][5039] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-13ce75130c' May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:08.043 [INFO][5039] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:08.048 [INFO][5039] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:08.057 [INFO][5039] ipam/ipam.go 489: Trying affinity for 192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:08.069 [INFO][5039] ipam/ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:08.071 [INFO][5039] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:08.072 [INFO][5039] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:08.073 [INFO][5039] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:08.079 [INFO][5039] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:08.094 [INFO][5039] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.119.197/26] block=192.168.119.192/26 handle="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:08.095 [INFO][5039] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.197/26] handle="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:08.095 [INFO][5039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
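One small reading aid for the WorkloadEndpoint dumps: the port numbers are printed in Go's hex notation, so the coredns endpoints expose Port:0x35 for dns and dns-tcp and Port:0x23c1 for metrics, which are 53 and 9153 in decimal. Trivial to confirm:

package main

import "fmt"

func main() {
    // Hex port values exactly as printed in the WorkloadEndpoint dumps.
    fmt.Println(0x35)   // 53   -> dns / dns-tcp
    fmt.Println(0x23c1) // 9153 -> coredns metrics port
}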
May 13 23:45:08.126819 containerd[1765]: 2025-05-13 23:45:08.095 [INFO][5039] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.197/26] IPv6=[] ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" HandleID="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:45:08.127920 containerd[1765]: 2025-05-13 23:45:08.098 [INFO][5019] cni-plugin/k8s.go 386: Populated endpoint ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Namespace="calico-system" Pod="calico-kube-controllers-7dcd59c5cd-jhn7j" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0", GenerateName:"calico-kube-controllers-7dcd59c5cd-", Namespace:"calico-system", SelfLink:"", UID:"447c5b24-e035-4d61-a6dc-6b184c49bb3e", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dcd59c5cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"", Pod:"calico-kube-controllers-7dcd59c5cd-jhn7j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4ae646a43c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:08.127920 containerd[1765]: 2025-05-13 23:45:08.098 [INFO][5019] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.119.197/32] ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Namespace="calico-system" Pod="calico-kube-controllers-7dcd59c5cd-jhn7j" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:45:08.127920 containerd[1765]: 2025-05-13 23:45:08.099 [INFO][5019] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4ae646a43c ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Namespace="calico-system" Pod="calico-kube-controllers-7dcd59c5cd-jhn7j" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:45:08.127920 containerd[1765]: 2025-05-13 23:45:08.104 [INFO][5019] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Namespace="calico-system" Pod="calico-kube-controllers-7dcd59c5cd-jhn7j" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:45:08.127920 
containerd[1765]: 2025-05-13 23:45:08.104 [INFO][5019] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Namespace="calico-system" Pod="calico-kube-controllers-7dcd59c5cd-jhn7j" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0", GenerateName:"calico-kube-controllers-7dcd59c5cd-", Namespace:"calico-system", SelfLink:"", UID:"447c5b24-e035-4d61-a6dc-6b184c49bb3e", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dcd59c5cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f", Pod:"calico-kube-controllers-7dcd59c5cd-jhn7j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4ae646a43c", MAC:"62:93:01:82:01:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:08.127920 containerd[1765]: 2025-05-13 23:45:08.123 [INFO][5019] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Namespace="calico-system" Pod="calico-kube-controllers-7dcd59c5cd-jhn7j" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:45:08.179365 containerd[1765]: time="2025-05-13T23:45:08.179325265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k92sd,Uid:cfdf6fc4-3589-4a87-a2a9-c9b41a4fad54,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5\"" May 13 23:45:08.183801 containerd[1765]: time="2025-05-13T23:45:08.183675752Z" level=info msg="CreateContainer within sandbox \"6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:45:08.202347 containerd[1765]: time="2025-05-13T23:45:08.202286745Z" level=info msg="connecting to shim 335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" address="unix:///run/containerd/s/49ad69ab75d61e794df50c4102eaff4a12414cf733195c7a72ead624cc8ca414" namespace=k8s.io protocol=ttrpc version=3 May 13 23:45:08.216537 systemd-networkd[1332]: cali87b07a9e57b: Gained IPv6LL May 13 23:45:08.233384 systemd[1]: Started cri-containerd-335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f.scope - libcontainer container 
335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f. May 13 23:45:08.256970 containerd[1765]: time="2025-05-13T23:45:08.256816521Z" level=info msg="Container 4608396f5b500cb22d3fdec8a8ab9cedaaae659fa93a01bbf0595630cbd89da8: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:08.279898 containerd[1765]: time="2025-05-13T23:45:08.279854681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dcd59c5cd-jhn7j,Uid:447c5b24-e035-4d61-a6dc-6b184c49bb3e,Namespace:calico-system,Attempt:0,} returns sandbox id \"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\"" May 13 23:45:08.281116 containerd[1765]: time="2025-05-13T23:45:08.280976083Z" level=info msg="CreateContainer within sandbox \"6fe10166a5659b3ec456fe7cee594a0c68f674b1dd87a0d93773d186d4738fc5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4608396f5b500cb22d3fdec8a8ab9cedaaae659fa93a01bbf0595630cbd89da8\"" May 13 23:45:08.281497 containerd[1765]: time="2025-05-13T23:45:08.281384524Z" level=info msg="StartContainer for \"4608396f5b500cb22d3fdec8a8ab9cedaaae659fa93a01bbf0595630cbd89da8\"" May 13 23:45:08.283927 containerd[1765]: time="2025-05-13T23:45:08.283895648Z" level=info msg="connecting to shim 4608396f5b500cb22d3fdec8a8ab9cedaaae659fa93a01bbf0595630cbd89da8" address="unix:///run/containerd/s/9ecc465fc2bfa40981531d54a79bbd2e4f1aac88beb2fca5bbf37c040a42c603" protocol=ttrpc version=3 May 13 23:45:08.306388 systemd[1]: Started cri-containerd-4608396f5b500cb22d3fdec8a8ab9cedaaae659fa93a01bbf0595630cbd89da8.scope - libcontainer container 4608396f5b500cb22d3fdec8a8ab9cedaaae659fa93a01bbf0595630cbd89da8. May 13 23:45:08.339823 containerd[1765]: time="2025-05-13T23:45:08.339701106Z" level=info msg="StartContainer for \"4608396f5b500cb22d3fdec8a8ab9cedaaae659fa93a01bbf0595630cbd89da8\" returns successfully" May 13 23:45:08.728371 systemd-networkd[1332]: cali4be9b7d2edb: Gained IPv6LL May 13 23:45:09.031974 kubelet[3386]: I0513 23:45:09.030170 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-k92sd" podStartSLOduration=52.030152157 podStartE2EDuration="52.030152157s" podCreationTimestamp="2025-05-13 23:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:45:09.029805796 +0000 UTC m=+56.326742405" watchObservedRunningTime="2025-05-13 23:45:09.030152157 +0000 UTC m=+56.327088766" May 13 23:45:09.625606 systemd-networkd[1332]: calida93b22efd9: Gained IPv6LL May 13 23:45:09.816969 systemd-networkd[1332]: calib4ae646a43c: Gained IPv6LL May 13 23:45:09.967875 containerd[1765]: time="2025-05-13T23:45:09.967824601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kqzwk,Uid:74844576-5d15-4959-aa52-fdc4c5736ba8,Namespace:calico-system,Attempt:0,}" May 13 23:45:09.968278 containerd[1765]: time="2025-05-13T23:45:09.968026362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c659f749-8p7bb,Uid:a6acda27-adaa-464d-9d6d-5b299eb91453,Namespace:calico-apiserver,Attempt:0,}" May 13 23:45:10.551598 containerd[1765]: time="2025-05-13T23:45:10.551238762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:10.557444 containerd[1765]: time="2025-05-13T23:45:10.557394973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active 
requests=0, bytes read=40247603" May 13 23:45:10.572325 containerd[1765]: time="2025-05-13T23:45:10.572275838Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:10.575808 containerd[1765]: time="2025-05-13T23:45:10.575035323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:10.576231 containerd[1765]: time="2025-05-13T23:45:10.576005404Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 4.183896673s" May 13 23:45:10.576786 containerd[1765]: time="2025-05-13T23:45:10.576763686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 23:45:10.588576 containerd[1765]: time="2025-05-13T23:45:10.588059625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:45:10.602941 containerd[1765]: time="2025-05-13T23:45:10.602895970Z" level=info msg="CreateContainer within sandbox \"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:45:10.631639 systemd-networkd[1332]: cali1e5b888e3b3: Link UP May 13 23:45:10.632409 systemd-networkd[1332]: cali1e5b888e3b3: Gained carrier May 13 23:45:10.646668 containerd[1765]: time="2025-05-13T23:45:10.645427322Z" level=info msg="Container 4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.526 [INFO][5217] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0 calico-apiserver-6c659f749- calico-apiserver a6acda27-adaa-464d-9d6d-5b299eb91453 713 0 2025-05-13 23:44:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c659f749 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-13ce75130c calico-apiserver-6c659f749-8p7bb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1e5b888e3b3 [] []}} ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-8p7bb" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-" May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.526 [INFO][5217] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-8p7bb" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.566 
[INFO][5240] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" HandleID="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.579 [INFO][5240] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" HandleID="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000382ae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-13ce75130c", "pod":"calico-apiserver-6c659f749-8p7bb", "timestamp":"2025-05-13 23:45:10.566144268 +0000 UTC"}, Hostname:"ci-4284.0.0-n-13ce75130c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.579 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.580 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.580 [INFO][5240] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-13ce75130c' May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.582 [INFO][5240] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.591 [INFO][5240] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.599 [INFO][5240] ipam/ipam.go 489: Trying affinity for 192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.602 [INFO][5240] ipam/ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.606 [INFO][5240] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.606 [INFO][5240] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.608 [INFO][5240] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13 May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.613 [INFO][5240] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.624 [INFO][5240] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.119.198/26] 
block=192.168.119.192/26 handle="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.624 [INFO][5240] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.198/26] handle="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.624 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:45:10.659897 containerd[1765]: 2025-05-13 23:45:10.624 [INFO][5240] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.198/26] IPv6=[] ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" HandleID="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:45:10.660488 containerd[1765]: 2025-05-13 23:45:10.627 [INFO][5217] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-8p7bb" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0", GenerateName:"calico-apiserver-6c659f749-", Namespace:"calico-apiserver", SelfLink:"", UID:"a6acda27-adaa-464d-9d6d-5b299eb91453", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c659f749", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"", Pod:"calico-apiserver-6c659f749-8p7bb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e5b888e3b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:10.660488 containerd[1765]: 2025-05-13 23:45:10.628 [INFO][5217] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.119.198/32] ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-8p7bb" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:45:10.660488 containerd[1765]: 2025-05-13 23:45:10.628 [INFO][5217] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e5b888e3b3 ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-8p7bb" 
WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:45:10.660488 containerd[1765]: 2025-05-13 23:45:10.632 [INFO][5217] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-8p7bb" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:45:10.660488 containerd[1765]: 2025-05-13 23:45:10.633 [INFO][5217] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-8p7bb" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0", GenerateName:"calico-apiserver-6c659f749-", Namespace:"calico-apiserver", SelfLink:"", UID:"a6acda27-adaa-464d-9d6d-5b299eb91453", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c659f749", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13", Pod:"calico-apiserver-6c659f749-8p7bb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e5b888e3b3", MAC:"62:d7:e7:b3:90:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:10.660488 containerd[1765]: 2025-05-13 23:45:10.651 [INFO][5217] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Namespace="calico-apiserver" Pod="calico-apiserver-6c659f749-8p7bb" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:45:10.687992 containerd[1765]: time="2025-05-13T23:45:10.687337153Z" level=info msg="CreateContainer within sandbox \"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\"" May 13 23:45:10.688867 containerd[1765]: time="2025-05-13T23:45:10.688379914Z" level=info msg="StartContainer for \"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\"" May 13 23:45:10.692033 containerd[1765]: time="2025-05-13T23:45:10.691837400Z" level=info msg="connecting to shim 4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488" 
address="unix:///run/containerd/s/1a4bd4c00b37cbfc7d40ef9cd0576ea9feb87bc13cc79ceaa41d4c34b5d6fb2c" protocol=ttrpc version=3 May 13 23:45:10.729384 systemd[1]: Started cri-containerd-4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488.scope - libcontainer container 4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488. May 13 23:45:10.742446 systemd-networkd[1332]: cali06780b2a8e0: Link UP May 13 23:45:10.744527 systemd-networkd[1332]: cali06780b2a8e0: Gained carrier May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.522 [INFO][5208] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--13ce75130c-k8s-csi--node--driver--kqzwk-eth0 csi-node-driver- calico-system 74844576-5d15-4959-aa52-fdc4c5736ba8 589 0 2025-05-13 23:44:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4284.0.0-n-13ce75130c csi-node-driver-kqzwk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali06780b2a8e0 [] []}} ContainerID="9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" Namespace="calico-system" Pod="csi-node-driver-kqzwk" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-csi--node--driver--kqzwk-" May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.523 [INFO][5208] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" Namespace="calico-system" Pod="csi-node-driver-kqzwk" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-csi--node--driver--kqzwk-eth0" May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.567 [INFO][5238] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" HandleID="k8s-pod-network.9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" Workload="ci--4284.0.0--n--13ce75130c-k8s-csi--node--driver--kqzwk-eth0" May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.589 [INFO][5238] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" HandleID="k8s-pod-network.9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" Workload="ci--4284.0.0--n--13ce75130c-k8s-csi--node--driver--kqzwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319cb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-13ce75130c", "pod":"csi-node-driver-kqzwk", "timestamp":"2025-05-13 23:45:10.56734331 +0000 UTC"}, Hostname:"ci-4284.0.0-n-13ce75130c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.590 [INFO][5238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.625 [INFO][5238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.625 [INFO][5238] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-13ce75130c' May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.682 [INFO][5238] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.690 [INFO][5238] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.701 [INFO][5238] ipam/ipam.go 489: Trying affinity for 192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.704 [INFO][5238] ipam/ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.708 [INFO][5238] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.708 [INFO][5238] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.711 [INFO][5238] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.718 [INFO][5238] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.732 [INFO][5238] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.119.199/26] block=192.168.119.192/26 handle="k8s-pod-network.9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.732 [INFO][5238] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.199/26] handle="k8s-pod-network.9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.733 [INFO][5238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
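The IPAM entries above show Calico taking the host-wide IPAM lock, confirming this node's affinity for the block 192.168.119.192/26, and handing out the next free addresses (.198, then .199) from it. The following is a minimal, hypothetical sketch of that "assign the next free address from a block" step, not Calico's actual implementation; the block value and the already-allocated addresses are taken from the log above.

```go
// Hypothetical sketch of per-block IP assignment, loosely mirroring the
// "Attempting to assign 1 addresses from block" step above. Not Calico's
// code; it only illustrates picking the next free host address from a /26.
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in the prefix not already allocated.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.119.192/26")
	allocated := map[netip.Addr]bool{}
	// Pretend .192-.197 were handed out earlier, as in the node's history above.
	for a, i := block.Addr(), 0; i < 6; a, i = a.Next(), i+1 {
		allocated[a] = true
	}
	if ip, ok := nextFree(block, allocated); ok {
		fmt.Println("assigned", ip) // assigned 192.168.119.198
	}
}
```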
May 13 23:45:10.799321 containerd[1765]: 2025-05-13 23:45:10.733 [INFO][5238] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.199/26] IPv6=[] ContainerID="9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" HandleID="k8s-pod-network.9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" Workload="ci--4284.0.0--n--13ce75130c-k8s-csi--node--driver--kqzwk-eth0" May 13 23:45:10.800070 containerd[1765]: 2025-05-13 23:45:10.735 [INFO][5208] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" Namespace="calico-system" Pod="csi-node-driver-kqzwk" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-csi--node--driver--kqzwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-csi--node--driver--kqzwk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"74844576-5d15-4959-aa52-fdc4c5736ba8", ResourceVersion:"589", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"", Pod:"csi-node-driver-kqzwk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali06780b2a8e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:10.800070 containerd[1765]: 2025-05-13 23:45:10.738 [INFO][5208] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.119.199/32] ContainerID="9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" Namespace="calico-system" Pod="csi-node-driver-kqzwk" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-csi--node--driver--kqzwk-eth0" May 13 23:45:10.800070 containerd[1765]: 2025-05-13 23:45:10.738 [INFO][5208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06780b2a8e0 ContainerID="9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" Namespace="calico-system" Pod="csi-node-driver-kqzwk" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-csi--node--driver--kqzwk-eth0" May 13 23:45:10.800070 containerd[1765]: 2025-05-13 23:45:10.743 [INFO][5208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" Namespace="calico-system" Pod="csi-node-driver-kqzwk" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-csi--node--driver--kqzwk-eth0" May 13 23:45:10.800070 containerd[1765]: 2025-05-13 23:45:10.744 [INFO][5208] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" Namespace="calico-system" Pod="csi-node-driver-kqzwk" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-csi--node--driver--kqzwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-csi--node--driver--kqzwk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"74844576-5d15-4959-aa52-fdc4c5736ba8", ResourceVersion:"589", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a", Pod:"csi-node-driver-kqzwk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali06780b2a8e0", MAC:"2e:19:1c:9a:4f:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:10.800070 containerd[1765]: 2025-05-13 23:45:10.761 [INFO][5208] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" Namespace="calico-system" Pod="csi-node-driver-kqzwk" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-csi--node--driver--kqzwk-eth0" May 13 23:45:10.819493 containerd[1765]: time="2025-05-13T23:45:10.818881335Z" level=info msg="connecting to shim 9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" address="unix:///run/containerd/s/14eb14c728c52d5e78802e800c58cbe8f818c6985f7eca970534581140d416df" namespace=k8s.io protocol=ttrpc version=3 May 13 23:45:10.854368 systemd[1]: Started cri-containerd-9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13.scope - libcontainer container 9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13. 
May 13 23:45:10.870360 containerd[1765]: time="2025-05-13T23:45:10.870260662Z" level=info msg="StartContainer for \"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\" returns successfully" May 13 23:45:10.910314 containerd[1765]: time="2025-05-13T23:45:10.910237210Z" level=info msg="connecting to shim 9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a" address="unix:///run/containerd/s/93dd30ce1d1a13d22688884bc1f743ecfa2a601303a7ccd479788d689f984b32" namespace=k8s.io protocol=ttrpc version=3 May 13 23:45:10.924267 containerd[1765]: time="2025-05-13T23:45:10.924232633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c659f749-8p7bb,Uid:a6acda27-adaa-464d-9d6d-5b299eb91453,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\"" May 13 23:45:10.930906 containerd[1765]: time="2025-05-13T23:45:10.930710924Z" level=info msg="CreateContainer within sandbox \"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:45:10.960461 systemd[1]: Started cri-containerd-9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a.scope - libcontainer container 9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a. May 13 23:45:10.978185 containerd[1765]: time="2025-05-13T23:45:10.978050524Z" level=info msg="Container 9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:11.028632 containerd[1765]: time="2025-05-13T23:45:11.027658688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kqzwk,Uid:74844576-5d15-4959-aa52-fdc4c5736ba8,Namespace:calico-system,Attempt:0,} returns sandbox id \"9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a\"" May 13 23:45:11.043144 containerd[1765]: time="2025-05-13T23:45:11.042480433Z" level=info msg="CreateContainer within sandbox \"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444\"" May 13 23:45:11.045266 containerd[1765]: time="2025-05-13T23:45:11.045242598Z" level=info msg="StartContainer for \"9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444\"" May 13 23:45:11.047604 containerd[1765]: time="2025-05-13T23:45:11.046449680Z" level=info msg="connecting to shim 9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444" address="unix:///run/containerd/s/14eb14c728c52d5e78802e800c58cbe8f818c6985f7eca970534581140d416df" protocol=ttrpc version=3 May 13 23:45:11.070362 systemd[1]: Started cri-containerd-9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444.scope - libcontainer container 9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444. 
May 13 23:45:11.070974 containerd[1765]: time="2025-05-13T23:45:11.070940601Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:11.078535 containerd[1765]: time="2025-05-13T23:45:11.078480974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 23:45:11.081224 containerd[1765]: time="2025-05-13T23:45:11.080158617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 492.058192ms" May 13 23:45:11.081361 containerd[1765]: time="2025-05-13T23:45:11.081342579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 23:45:11.084276 containerd[1765]: time="2025-05-13T23:45:11.084232784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 23:45:11.085202 containerd[1765]: time="2025-05-13T23:45:11.085177985Z" level=info msg="CreateContainer within sandbox \"99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:45:11.130522 containerd[1765]: time="2025-05-13T23:45:11.130483462Z" level=info msg="Container 9b53bf16f9676744531a3acc11521fcb4ac256f3dddc95ade964cdf74f0bbc53: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:11.134121 containerd[1765]: time="2025-05-13T23:45:11.134095388Z" level=info msg="StartContainer for \"9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444\" returns successfully" May 13 23:45:11.158562 containerd[1765]: time="2025-05-13T23:45:11.158384309Z" level=info msg="CreateContainer within sandbox \"99b9e42b8c4b3605b4562fa5c416a5b4118fa1bd775c1724bc7010e65af255a7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9b53bf16f9676744531a3acc11521fcb4ac256f3dddc95ade964cdf74f0bbc53\"" May 13 23:45:11.160477 containerd[1765]: time="2025-05-13T23:45:11.160400313Z" level=info msg="StartContainer for \"9b53bf16f9676744531a3acc11521fcb4ac256f3dddc95ade964cdf74f0bbc53\"" May 13 23:45:11.161807 containerd[1765]: time="2025-05-13T23:45:11.161766475Z" level=info msg="connecting to shim 9b53bf16f9676744531a3acc11521fcb4ac256f3dddc95ade964cdf74f0bbc53" address="unix:///run/containerd/s/6c41283aba64d008fabee3a063062f9655d5eb7fbff2df6650b3446c4b4dd751" protocol=ttrpc version=3 May 13 23:45:11.196498 systemd[1]: Started cri-containerd-9b53bf16f9676744531a3acc11521fcb4ac256f3dddc95ade964cdf74f0bbc53.scope - libcontainer container 9b53bf16f9676744531a3acc11521fcb4ac256f3dddc95ade964cdf74f0bbc53. 
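The pull records above report both the image size ("41616801" bytes) and the wall time: 4.183896673s for the initial pull, and 492.058192ms for the second "pull", which only re-checked the already-present image (bytes read=77). A rough throughput estimate can be derived from the first pull, assuming the quoted size is what was actually transferred.

```go
// Rough throughput from the "Pulled image ... size \"41616801\" in 4.183896673s"
// entry above. Assumes the quoted size was transferred in that time; real
// pulls may reuse cached layers, so this is only an upper-bound sketch.
package main

import (
	"fmt"
	"time"
)

func main() {
	size := 41616801.0 // bytes, from the log
	d, err := time.ParseDuration("4.183896673s")
	if err != nil {
		panic(err)
	}
	mibps := size / d.Seconds() / (1 << 20)
	fmt.Printf("~%.1f MiB/s\n", mibps) // ~9.5 MiB/s
}
```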
May 13 23:45:11.290265 containerd[1765]: time="2025-05-13T23:45:11.290143652Z" level=info msg="StartContainer for \"9b53bf16f9676744531a3acc11521fcb4ac256f3dddc95ade964cdf74f0bbc53\" returns successfully" May 13 23:45:11.864369 systemd-networkd[1332]: cali06780b2a8e0: Gained IPv6LL May 13 23:45:12.040491 kubelet[3386]: I0513 23:45:12.040423 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c659f749-ltpvp" podStartSLOduration=40.844589507 podStartE2EDuration="45.040405561s" podCreationTimestamp="2025-05-13 23:44:27 +0000 UTC" firstStartedPulling="2025-05-13 23:45:06.39172713 +0000 UTC m=+53.688663739" lastFinishedPulling="2025-05-13 23:45:10.587543144 +0000 UTC m=+57.884479793" observedRunningTime="2025-05-13 23:45:11.035262461 +0000 UTC m=+58.332199110" watchObservedRunningTime="2025-05-13 23:45:12.040405561 +0000 UTC m=+59.337342170" May 13 23:45:12.060287 kubelet[3386]: I0513 23:45:12.060228 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c659f749-8p7bb" podStartSLOduration=45.060190914 podStartE2EDuration="45.060190914s" podCreationTimestamp="2025-05-13 23:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:45:12.041076482 +0000 UTC m=+59.338013091" watchObservedRunningTime="2025-05-13 23:45:12.060190914 +0000 UTC m=+59.357127523" May 13 23:45:12.697381 systemd-networkd[1332]: cali1e5b888e3b3: Gained IPv6LL May 13 23:45:12.709121 kubelet[3386]: I0513 23:45:12.708261 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77478f65f8-x25bh" podStartSLOduration=39.773828365 podStartE2EDuration="43.70824253s" podCreationTimestamp="2025-05-13 23:44:29 +0000 UTC" firstStartedPulling="2025-05-13 23:45:07.147938736 +0000 UTC m=+54.444875345" lastFinishedPulling="2025-05-13 23:45:11.082352901 +0000 UTC m=+58.379289510" observedRunningTime="2025-05-13 23:45:12.060914956 +0000 UTC m=+59.357851565" watchObservedRunningTime="2025-05-13 23:45:12.70824253 +0000 UTC m=+60.005179139" May 13 23:45:13.384553 systemd[1]: Created slice kubepods-besteffort-pod49f71e64_9a41_4dd8_b3fc_1171689bb3bf.slice - libcontainer container kubepods-besteffort-pod49f71e64_9a41_4dd8_b3fc_1171689bb3bf.slice. 
May 13 23:45:13.492553 kubelet[3386]: I0513 23:45:13.492405 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/49f71e64-9a41-4dd8-b3fc-1171689bb3bf-calico-apiserver-certs\") pod \"calico-apiserver-77478f65f8-hm5v7\" (UID: \"49f71e64-9a41-4dd8-b3fc-1171689bb3bf\") " pod="calico-apiserver/calico-apiserver-77478f65f8-hm5v7" May 13 23:45:13.492553 kubelet[3386]: I0513 23:45:13.492453 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm66p\" (UniqueName: \"kubernetes.io/projected/49f71e64-9a41-4dd8-b3fc-1171689bb3bf-kube-api-access-tm66p\") pod \"calico-apiserver-77478f65f8-hm5v7\" (UID: \"49f71e64-9a41-4dd8-b3fc-1171689bb3bf\") " pod="calico-apiserver/calico-apiserver-77478f65f8-hm5v7" May 13 23:45:13.691349 containerd[1765]: time="2025-05-13T23:45:13.691200153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77478f65f8-hm5v7,Uid:49f71e64-9a41-4dd8-b3fc-1171689bb3bf,Namespace:calico-apiserver,Attempt:0,}" May 13 23:45:14.600673 systemd-networkd[1332]: cali3e3c73cb156: Link UP May 13 23:45:14.602121 systemd-networkd[1332]: cali3e3c73cb156: Gained carrier May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.505 [INFO][5487] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--hm5v7-eth0 calico-apiserver-77478f65f8- calico-apiserver 49f71e64-9a41-4dd8-b3fc-1171689bb3bf 900 0 2025-05-13 23:45:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77478f65f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-13ce75130c calico-apiserver-77478f65f8-hm5v7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3e3c73cb156 [] []}} ContainerID="8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-hm5v7" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--hm5v7-" May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.505 [INFO][5487] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-hm5v7" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--hm5v7-eth0" May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.536 [INFO][5499] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" HandleID="k8s-pod-network.8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--hm5v7-eth0" May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.557 [INFO][5499] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" HandleID="k8s-pod-network.8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--hm5v7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x4000334d50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-13ce75130c", "pod":"calico-apiserver-77478f65f8-hm5v7", "timestamp":"2025-05-13 23:45:14.536564422 +0000 UTC"}, Hostname:"ci-4284.0.0-n-13ce75130c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.557 [INFO][5499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.557 [INFO][5499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.557 [INFO][5499] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-13ce75130c' May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.559 [INFO][5499] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.564 [INFO][5499] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-13ce75130c" May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.569 [INFO][5499] ipam/ipam.go 489: Trying affinity for 192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.571 [INFO][5499] ipam/ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.574 [INFO][5499] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.574 [INFO][5499] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.577 [INFO][5499] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.582 [INFO][5499] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.594 [INFO][5499] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.119.200/26] block=192.168.119.192/26 handle="k8s-pod-network.8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.595 [INFO][5499] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.200/26] handle="k8s-pod-network.8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.595 [INFO][5499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:45:14.629522 containerd[1765]: 2025-05-13 23:45:14.595 [INFO][5499] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.200/26] IPv6=[] ContainerID="8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" HandleID="k8s-pod-network.8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--hm5v7-eth0" May 13 23:45:14.630042 containerd[1765]: 2025-05-13 23:45:14.596 [INFO][5487] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-hm5v7" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--hm5v7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--hm5v7-eth0", GenerateName:"calico-apiserver-77478f65f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"49f71e64-9a41-4dd8-b3fc-1171689bb3bf", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 45, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77478f65f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"", Pod:"calico-apiserver-77478f65f8-hm5v7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e3c73cb156", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:14.630042 containerd[1765]: 2025-05-13 23:45:14.596 [INFO][5487] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.119.200/32] ContainerID="8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-hm5v7" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--hm5v7-eth0" May 13 23:45:14.630042 containerd[1765]: 2025-05-13 23:45:14.597 [INFO][5487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e3c73cb156 ContainerID="8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-hm5v7" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--hm5v7-eth0" May 13 23:45:14.630042 containerd[1765]: 2025-05-13 23:45:14.603 [INFO][5487] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-hm5v7" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--hm5v7-eth0" May 13 23:45:14.630042 containerd[1765]: 2025-05-13 23:45:14.603 [INFO][5487] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-hm5v7" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--hm5v7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--hm5v7-eth0", GenerateName:"calico-apiserver-77478f65f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"49f71e64-9a41-4dd8-b3fc-1171689bb3bf", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 45, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77478f65f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a", Pod:"calico-apiserver-77478f65f8-hm5v7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e3c73cb156", MAC:"f6:d1:cb:1c:f8:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:14.630042 containerd[1765]: 2025-05-13 23:45:14.625 [INFO][5487] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" Namespace="calico-apiserver" Pod="calico-apiserver-77478f65f8-hm5v7" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--77478f65f8--hm5v7-eth0" May 13 23:45:15.014926 containerd[1765]: time="2025-05-13T23:45:15.014886871Z" level=info msg="connecting to shim 8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a" address="unix:///run/containerd/s/2c323158da80b1b4e8b7c139bf6411138d88990086ee00d9aafe146278e8c853" namespace=k8s.io protocol=ttrpc version=3 May 13 23:45:15.035364 systemd[1]: Started cri-containerd-8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a.scope - libcontainer container 8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a. May 13 23:45:15.151378 systemd[1]: cri-containerd-9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444.scope: Deactivated successfully. 
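The "Added Mac, interface name, and active container ID" entries above show Calico filling in unicast, locally administered MAC addresses for the workload endpoints (62:d7:e7:b3:90:5a, 2e:19:1c:9a:4f:d2, f6:d1:cb:1c:f8:4e). A hedged sketch of generating such an address follows; it is not Calico's generator, just the standard bit manipulation these values are consistent with.

```go
// Sketch of generating a locally administered, unicast MAC like the values
// recorded above: random bytes with the multicast bit cleared and the
// locally-administered bit set. Not Calico's implementation.
package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

func randomMAC() (net.HardwareAddr, error) {
	buf := make([]byte, 6)
	if _, err := rand.Read(buf); err != nil {
		return nil, err
	}
	buf[0] &^= 0x01 // clear multicast bit -> unicast
	buf[0] |= 0x02  // set locally administered bit
	return net.HardwareAddr(buf), nil
}

func main() {
	mac, err := randomMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac) // e.g. 62:d7:e7:b3:90:5a (different each run)
}
```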
May 13 23:45:15.618678 containerd[1765]: time="2025-05-13T23:45:15.038546391Z" level=info msg="StopContainer for \"9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444\" with timeout 30 (s)" May 13 23:45:15.618678 containerd[1765]: time="2025-05-13T23:45:15.051433573Z" level=info msg="Stop container \"9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444\" with signal terminated" May 13 23:45:15.618678 containerd[1765]: time="2025-05-13T23:45:15.153514706Z" level=info msg="received exit event container_id:\"9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444\" id:\"9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444\" pid:5415 exit_status:1 exited_at:{seconds:1747179915 nanos:153203825}" May 13 23:45:15.618678 containerd[1765]: time="2025-05-13T23:45:15.153615826Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444\" id:\"9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444\" pid:5415 exit_status:1 exited_at:{seconds:1747179915 nanos:153203825}" May 13 23:45:15.151849 systemd[1]: cri-containerd-9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444.scope: Consumed 1.265s CPU time, 49.8M memory peak. May 13 23:45:15.193393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444-rootfs.mount: Deactivated successfully. May 13 23:45:16.280449 systemd-networkd[1332]: cali3e3c73cb156: Gained IPv6LL May 13 23:45:20.679988 containerd[1765]: time="2025-05-13T23:45:20.679638036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77478f65f8-hm5v7,Uid:49f71e64-9a41-4dd8-b3fc-1171689bb3bf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a\"" May 13 23:45:20.682013 containerd[1765]: time="2025-05-13T23:45:20.681066278Z" level=info msg="StopContainer for \"9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444\" returns successfully" May 13 23:45:20.682013 containerd[1765]: time="2025-05-13T23:45:20.681574639Z" level=info msg="StopPodSandbox for \"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\"" May 13 23:45:20.682013 containerd[1765]: time="2025-05-13T23:45:20.681627719Z" level=info msg="Container to stop \"9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:45:20.684542 containerd[1765]: time="2025-05-13T23:45:20.684501404Z" level=info msg="CreateContainer within sandbox \"8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:45:20.692431 systemd[1]: cri-containerd-9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13.scope: Deactivated successfully. May 13 23:45:20.696059 containerd[1765]: time="2025-05-13T23:45:20.695731304Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\" id:\"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\" pid:5336 exit_status:137 exited_at:{seconds:1747179920 nanos:695364503}" May 13 23:45:20.719309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13-rootfs.mount: Deactivated successfully. 
May 13 23:45:21.370186 containerd[1765]: time="2025-05-13T23:45:21.370146135Z" level=info msg="shim disconnected" id=9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13 namespace=k8s.io May 13 23:45:21.370513 containerd[1765]: time="2025-05-13T23:45:21.370263055Z" level=warning msg="cleaning up after shim disconnected" id=9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13 namespace=k8s.io May 13 23:45:21.370513 containerd[1765]: time="2025-05-13T23:45:21.370297935Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:45:21.386229 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13-shm.mount: Deactivated successfully. May 13 23:45:21.392020 containerd[1765]: time="2025-05-13T23:45:21.391789533Z" level=info msg="received exit event sandbox_id:\"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\" exit_status:137 exited_at:{seconds:1747179920 nanos:695364503}" May 13 23:45:21.419399 containerd[1765]: time="2025-05-13T23:45:21.418917541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:21.422829 containerd[1765]: time="2025-05-13T23:45:21.422748708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 13 23:45:21.439573 systemd-networkd[1332]: cali1e5b888e3b3: Link DOWN May 13 23:45:21.439579 systemd-networkd[1332]: cali1e5b888e3b3: Lost carrier May 13 23:45:21.511355 containerd[1765]: 2025-05-13 23:45:21.437 [INFO][5653] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" May 13 23:45:21.511355 containerd[1765]: 2025-05-13 23:45:21.437 [INFO][5653] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" iface="eth0" netns="/var/run/netns/cni-14a90547-6001-0a78-a58d-ad258205bd69" May 13 23:45:21.511355 containerd[1765]: 2025-05-13 23:45:21.438 [INFO][5653] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" iface="eth0" netns="/var/run/netns/cni-14a90547-6001-0a78-a58d-ad258205bd69" May 13 23:45:21.511355 containerd[1765]: 2025-05-13 23:45:21.444 [INFO][5653] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" after=6.817412ms iface="eth0" netns="/var/run/netns/cni-14a90547-6001-0a78-a58d-ad258205bd69" May 13 23:45:21.511355 containerd[1765]: 2025-05-13 23:45:21.444 [INFO][5653] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" May 13 23:45:21.511355 containerd[1765]: 2025-05-13 23:45:21.444 [INFO][5653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" May 13 23:45:21.511355 containerd[1765]: 2025-05-13 23:45:21.462 [INFO][5663] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" HandleID="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:45:21.511355 containerd[1765]: 2025-05-13 23:45:21.462 [INFO][5663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:21.511355 containerd[1765]: 2025-05-13 23:45:21.462 [INFO][5663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:45:21.511355 containerd[1765]: 2025-05-13 23:45:21.506 [INFO][5663] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" HandleID="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:45:21.511355 containerd[1765]: 2025-05-13 23:45:21.506 [INFO][5663] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" HandleID="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:45:21.511355 containerd[1765]: 2025-05-13 23:45:21.508 [INFO][5663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:45:21.511355 containerd[1765]: 2025-05-13 23:45:21.509 [INFO][5653] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" May 13 23:45:21.514915 containerd[1765]: time="2025-05-13T23:45:21.512004865Z" level=info msg="TearDown network for sandbox \"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\" successfully" May 13 23:45:21.514915 containerd[1765]: time="2025-05-13T23:45:21.512309706Z" level=info msg="StopPodSandbox for \"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\" returns successfully" May 13 23:45:21.515649 systemd[1]: run-netns-cni\x2d14a90547\x2d6001\x2d0a78\x2da58d\x2dad258205bd69.mount: Deactivated successfully. 
May 13 23:45:21.531174 containerd[1765]: time="2025-05-13T23:45:21.531110899Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:21.577135 containerd[1765]: time="2025-05-13T23:45:21.576993660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:21.578444 containerd[1765]: time="2025-05-13T23:45:21.578222902Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 10.493656998s" May 13 23:45:21.578444 containerd[1765]: time="2025-05-13T23:45:21.578258942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 13 23:45:21.580442 containerd[1765]: time="2025-05-13T23:45:21.580290946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 23:45:21.592076 containerd[1765]: time="2025-05-13T23:45:21.591702206Z" level=info msg="CreateContainer within sandbox \"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 23:45:21.638931 kubelet[3386]: I0513 23:45:21.638828 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a6acda27-adaa-464d-9d6d-5b299eb91453-calico-apiserver-certs\") pod \"a6acda27-adaa-464d-9d6d-5b299eb91453\" (UID: \"a6acda27-adaa-464d-9d6d-5b299eb91453\") " May 13 23:45:21.638931 kubelet[3386]: I0513 23:45:21.638872 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pljzj\" (UniqueName: \"kubernetes.io/projected/a6acda27-adaa-464d-9d6d-5b299eb91453-kube-api-access-pljzj\") pod \"a6acda27-adaa-464d-9d6d-5b299eb91453\" (UID: \"a6acda27-adaa-464d-9d6d-5b299eb91453\") " May 13 23:45:21.641651 kubelet[3386]: I0513 23:45:21.641593 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6acda27-adaa-464d-9d6d-5b299eb91453-kube-api-access-pljzj" (OuterVolumeSpecName: "kube-api-access-pljzj") pod "a6acda27-adaa-464d-9d6d-5b299eb91453" (UID: "a6acda27-adaa-464d-9d6d-5b299eb91453"). InnerVolumeSpecName "kube-api-access-pljzj". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:45:21.643450 kubelet[3386]: I0513 23:45:21.643412 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6acda27-adaa-464d-9d6d-5b299eb91453-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "a6acda27-adaa-464d-9d6d-5b299eb91453" (UID: "a6acda27-adaa-464d-9d6d-5b299eb91453"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 23:45:21.672254 containerd[1765]: time="2025-05-13T23:45:21.671853588Z" level=info msg="Container 086e7edf418259a3255250e2db205c158eb2748b08b0e387c117253d9959b16e: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:21.718820 systemd[1]: var-lib-kubelet-pods-a6acda27\x2dadaa\x2d464d\x2d9d6d\x2d5b299eb91453-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpljzj.mount: Deactivated successfully. May 13 23:45:21.719038 systemd[1]: var-lib-kubelet-pods-a6acda27\x2dadaa\x2d464d\x2d9d6d\x2d5b299eb91453-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 13 23:45:21.739823 kubelet[3386]: I0513 23:45:21.739760 3386 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pljzj\" (UniqueName: \"kubernetes.io/projected/a6acda27-adaa-464d-9d6d-5b299eb91453-kube-api-access-pljzj\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:21.739823 kubelet[3386]: I0513 23:45:21.739795 3386 reconciler_common.go:288] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a6acda27-adaa-464d-9d6d-5b299eb91453-calico-apiserver-certs\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:21.976467 containerd[1765]: time="2025-05-13T23:45:21.975031643Z" level=info msg="CreateContainer within sandbox \"8506acee024a54afa78baab92a51078a15ee0b22927b87f279afb0a1f285d97a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"086e7edf418259a3255250e2db205c158eb2748b08b0e387c117253d9959b16e\"" May 13 23:45:21.978617 containerd[1765]: time="2025-05-13T23:45:21.977542287Z" level=info msg="StartContainer for \"086e7edf418259a3255250e2db205c158eb2748b08b0e387c117253d9959b16e\"" May 13 23:45:21.981292 containerd[1765]: time="2025-05-13T23:45:21.978759610Z" level=info msg="Container 426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:21.981292 containerd[1765]: time="2025-05-13T23:45:21.979619491Z" level=info msg="connecting to shim 086e7edf418259a3255250e2db205c158eb2748b08b0e387c117253d9959b16e" address="unix:///run/containerd/s/2c323158da80b1b4e8b7c139bf6411138d88990086ee00d9aafe146278e8c853" protocol=ttrpc version=3 May 13 23:45:21.984618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659008830.mount: Deactivated successfully. May 13 23:45:22.010366 systemd[1]: Started cri-containerd-086e7edf418259a3255250e2db205c158eb2748b08b0e387c117253d9959b16e.scope - libcontainer container 086e7edf418259a3255250e2db205c158eb2748b08b0e387c117253d9959b16e. May 13 23:45:22.089230 containerd[1765]: time="2025-05-13T23:45:22.089091244Z" level=info msg="StartContainer for \"086e7edf418259a3255250e2db205c158eb2748b08b0e387c117253d9959b16e\" returns successfully" May 13 23:45:22.096715 kubelet[3386]: I0513 23:45:22.096183 3386 scope.go:117] "RemoveContainer" containerID="9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444" May 13 23:45:22.099195 systemd[1]: Removed slice kubepods-besteffort-poda6acda27_adaa_464d_9d6d_5b299eb91453.slice - libcontainer container kubepods-besteffort-poda6acda27_adaa_464d_9d6d_5b299eb91453.slice. 
May 13 23:45:22.101388 containerd[1765]: time="2025-05-13T23:45:22.100139104Z" level=info msg="RemoveContainer for \"9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444\"" May 13 23:45:22.099438 systemd[1]: kubepods-besteffort-poda6acda27_adaa_464d_9d6d_5b299eb91453.slice: Consumed 1.283s CPU time, 50M memory peak. May 13 23:45:22.316973 containerd[1765]: time="2025-05-13T23:45:22.316858847Z" level=info msg="RemoveContainer for \"9eddec4c84d751c2b5f675bdb9f3fcd43e9f855ad0da845320982a2a982d7444\" returns successfully" May 13 23:45:22.428898 containerd[1765]: time="2025-05-13T23:45:22.428760044Z" level=info msg="CreateContainer within sandbox \"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78\"" May 13 23:45:22.430784 containerd[1765]: time="2025-05-13T23:45:22.430517407Z" level=info msg="StartContainer for \"426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78\"" May 13 23:45:22.433368 containerd[1765]: time="2025-05-13T23:45:22.433176772Z" level=info msg="connecting to shim 426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78" address="unix:///run/containerd/s/49ad69ab75d61e794df50c4102eaff4a12414cf733195c7a72ead624cc8ca414" protocol=ttrpc version=3 May 13 23:45:22.458380 systemd[1]: Started cri-containerd-426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78.scope - libcontainer container 426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78. May 13 23:45:22.539521 containerd[1765]: time="2025-05-13T23:45:22.539477240Z" level=info msg="StartContainer for \"426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78\" returns successfully" May 13 23:45:22.821481 kubelet[3386]: I0513 23:45:22.821040 3386 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6acda27-adaa-464d-9d6d-5b299eb91453" path="/var/lib/kubelet/pods/a6acda27-adaa-464d-9d6d-5b299eb91453/volumes" May 13 23:45:23.121689 kubelet[3386]: I0513 23:45:23.120642 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7dcd59c5cd-jhn7j" podStartSLOduration=41.822835806 podStartE2EDuration="55.120624866s" podCreationTimestamp="2025-05-13 23:44:28 +0000 UTC" firstStartedPulling="2025-05-13 23:45:08.281820885 +0000 UTC m=+55.578757494" lastFinishedPulling="2025-05-13 23:45:21.579609985 +0000 UTC m=+68.876546554" observedRunningTime="2025-05-13 23:45:23.119992305 +0000 UTC m=+70.416928874" watchObservedRunningTime="2025-05-13 23:45:23.120624866 +0000 UTC m=+70.417561475" May 13 23:45:23.134694 kubelet[3386]: I0513 23:45:23.134627 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77478f65f8-hm5v7" podStartSLOduration=10.134609731 podStartE2EDuration="10.134609731s" podCreationTimestamp="2025-05-13 23:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:45:23.13436021 +0000 UTC m=+70.431296819" watchObservedRunningTime="2025-05-13 23:45:23.134609731 +0000 UTC m=+70.431546300" May 13 23:45:23.153804 containerd[1765]: time="2025-05-13T23:45:23.153752125Z" level=info msg="TaskExit event in podsandbox handler container_id:\"426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78\" id:\"d7a0b556a51795e0fde526a34ab7c8d5b233af10bdbd0cb287163d84fc17f08f\" pid:5762 
exited_at:{seconds:1747179923 nanos:153306204}" May 13 23:45:23.224486 containerd[1765]: time="2025-05-13T23:45:23.223441208Z" level=info msg="StopContainer for \"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\" with timeout 30 (s)" May 13 23:45:23.224911 containerd[1765]: time="2025-05-13T23:45:23.224880650Z" level=info msg="Stop container \"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\" with signal terminated" May 13 23:45:23.255800 systemd[1]: cri-containerd-4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488.scope: Deactivated successfully. May 13 23:45:23.258032 containerd[1765]: time="2025-05-13T23:45:23.257877268Z" level=info msg="received exit event container_id:\"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\" id:\"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\" pid:5277 exit_status:1 exited_at:{seconds:1747179923 nanos:257057347}" May 13 23:45:23.258933 containerd[1765]: time="2025-05-13T23:45:23.258866030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\" id:\"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\" pid:5277 exit_status:1 exited_at:{seconds:1747179923 nanos:257057347}" May 13 23:45:23.284002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488-rootfs.mount: Deactivated successfully. May 13 23:45:25.226838 containerd[1765]: time="2025-05-13T23:45:25.226794625Z" level=info msg="StopContainer for \"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\" returns successfully" May 13 23:45:25.229280 containerd[1765]: time="2025-05-13T23:45:25.228322388Z" level=info msg="StopPodSandbox for \"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\"" May 13 23:45:25.229280 containerd[1765]: time="2025-05-13T23:45:25.228387748Z" level=info msg="Container to stop \"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:45:25.238508 systemd[1]: cri-containerd-3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117.scope: Deactivated successfully. May 13 23:45:25.243165 containerd[1765]: time="2025-05-13T23:45:25.242616173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\" id:\"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\" pid:4877 exit_status:137 exited_at:{seconds:1747179925 nanos:241058051}" May 13 23:45:25.286391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117-rootfs.mount: Deactivated successfully. 
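[Editor's note] The "StopContainer ... with timeout 30 (s)" / "Stop container ... with signal terminated" pair above is the kubelet asking the CRI runtime to deliver SIGTERM and escalate to SIGKILL once the 30 s grace period runs out. A sketch of the same RPC issued directly against containerd, assuming the stock k8s.io/cri-api v1 bindings and containerd's default CRI socket path (not kubelet code):

    // stopcontainer.go: direct CRI StopContainer call with a 30 s grace period,
    // using the container ID and timeout taken from the log lines above.
    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial CRI socket: %v", err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 45*time.Second)
        defer cancel()

        _, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
            ContainerId: "4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488",
            Timeout:     30, // seconds of SIGTERM grace before SIGKILL
        })
        if err != nil {
            log.Fatalf("StopContainer: %v", err)
        }
        log.Println("container stopped")
    }

On a node, crictl exposes the same RPC from the command line for ad-hoc debugging.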
May 13 23:45:25.916589 containerd[1765]: time="2025-05-13T23:45:25.916541643Z" level=info msg="shim disconnected" id=3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117 namespace=k8s.io May 13 23:45:25.916741 containerd[1765]: time="2025-05-13T23:45:25.916605243Z" level=warning msg="cleaning up after shim disconnected" id=3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117 namespace=k8s.io May 13 23:45:25.916741 containerd[1765]: time="2025-05-13T23:45:25.916639364Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:45:25.930640 containerd[1765]: time="2025-05-13T23:45:25.929053545Z" level=info msg="received exit event sandbox_id:\"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\" exit_status:137 exited_at:{seconds:1747179925 nanos:241058051}" May 13 23:45:25.932761 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117-shm.mount: Deactivated successfully. May 13 23:45:25.982721 systemd-networkd[1332]: cali87b07a9e57b: Link DOWN May 13 23:45:25.982728 systemd-networkd[1332]: cali87b07a9e57b: Lost carrier May 13 23:45:26.088645 containerd[1765]: 2025-05-13 23:45:25.979 [INFO][5849] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" May 13 23:45:26.088645 containerd[1765]: 2025-05-13 23:45:25.981 [INFO][5849] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" iface="eth0" netns="/var/run/netns/cni-af5f6828-c454-8e5f-96f0-38d148110310" May 13 23:45:26.088645 containerd[1765]: 2025-05-13 23:45:25.981 [INFO][5849] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" iface="eth0" netns="/var/run/netns/cni-af5f6828-c454-8e5f-96f0-38d148110310" May 13 23:45:26.088645 containerd[1765]: 2025-05-13 23:45:25.992 [INFO][5849] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" after=11.084339ms iface="eth0" netns="/var/run/netns/cni-af5f6828-c454-8e5f-96f0-38d148110310" May 13 23:45:26.088645 containerd[1765]: 2025-05-13 23:45:25.992 [INFO][5849] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" May 13 23:45:26.088645 containerd[1765]: 2025-05-13 23:45:25.992 [INFO][5849] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" May 13 23:45:26.088645 containerd[1765]: 2025-05-13 23:45:26.032 [INFO][5859] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" HandleID="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:45:26.088645 containerd[1765]: 2025-05-13 23:45:26.032 [INFO][5859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:26.088645 containerd[1765]: 2025-05-13 23:45:26.032 [INFO][5859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
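[Editor's note] The exit_status values in the TaskExit / exit events above follow the usual 128+N convention: 137 is 128+9, i.e. the process (here the sandbox's pause process) was killed with SIGKILL, whereas small values such as the earlier exit_status:1 are the process's own return code. A tiny decoder:

    // exitstatus.go: interprets the exit_status values seen in these events
    // using the conventional 128+signal encoding for signal deaths.
    package main

    import (
        "fmt"
        "syscall"
    )

    func describe(code int) string {
        if code > 128 && code <= 128+64 {
            sig := syscall.Signal(code - 128)
            return fmt.Sprintf("killed by signal %d (%s)", int(sig), sig)
        }
        return fmt.Sprintf("exited with status %d", code)
    }

    func main() {
        for _, code := range []int{1, 2, 137} {
            fmt.Printf("exit_status:%d -> %s\n", code, describe(code))
        }
    }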
May 13 23:45:26.088645 containerd[1765]: 2025-05-13 23:45:26.082 [INFO][5859] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" HandleID="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:45:26.088645 containerd[1765]: 2025-05-13 23:45:26.083 [INFO][5859] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" HandleID="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:45:26.088645 containerd[1765]: 2025-05-13 23:45:26.084 [INFO][5859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:45:26.088645 containerd[1765]: 2025-05-13 23:45:26.086 [INFO][5849] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" May 13 23:45:26.091403 containerd[1765]: time="2025-05-13T23:45:26.091329992Z" level=info msg="TearDown network for sandbox \"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\" successfully" May 13 23:45:26.091403 containerd[1765]: time="2025-05-13T23:45:26.091362432Z" level=info msg="StopPodSandbox for \"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\" returns successfully" May 13 23:45:26.092082 systemd[1]: run-netns-cni\x2daf5f6828\x2dc454\x2d8e5f\x2d96f0\x2d38d148110310.mount: Deactivated successfully. May 13 23:45:26.108876 kubelet[3386]: I0513 23:45:26.108825 3386 scope.go:117] "RemoveContainer" containerID="4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488" May 13 23:45:26.111241 containerd[1765]: time="2025-05-13T23:45:26.110791226Z" level=info msg="RemoveContainer for \"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\"" May 13 23:45:26.232571 containerd[1765]: time="2025-05-13T23:45:26.232377720Z" level=info msg="RemoveContainer for \"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\" returns successfully" May 13 23:45:26.233417 kubelet[3386]: I0513 23:45:26.233033 3386 scope.go:117] "RemoveContainer" containerID="4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488" May 13 23:45:26.233736 containerd[1765]: time="2025-05-13T23:45:26.233546762Z" level=error msg="ContainerStatus for \"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\": not found" May 13 23:45:26.236606 kubelet[3386]: E0513 23:45:26.236473 3386 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\": not found" containerID="4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488" May 13 23:45:26.236953 kubelet[3386]: I0513 23:45:26.236526 3386 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488"} err="failed to get container status \"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"4e1945315d05e94b60ce897f9c5a0385d9d07244b6968aa95719d2b2dcacb488\": not found" May 13 23:45:26.267840 kubelet[3386]: I0513 23:45:26.267433 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp5lt\" (UniqueName: \"kubernetes.io/projected/8b109c94-4423-43d5-babf-1f349408342e-kube-api-access-tp5lt\") pod \"8b109c94-4423-43d5-babf-1f349408342e\" (UID: \"8b109c94-4423-43d5-babf-1f349408342e\") " May 13 23:45:26.267840 kubelet[3386]: I0513 23:45:26.267485 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8b109c94-4423-43d5-babf-1f349408342e-calico-apiserver-certs\") pod \"8b109c94-4423-43d5-babf-1f349408342e\" (UID: \"8b109c94-4423-43d5-babf-1f349408342e\") " May 13 23:45:26.271952 systemd[1]: var-lib-kubelet-pods-8b109c94\x2d4423\x2d43d5\x2dbabf\x2d1f349408342e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 13 23:45:26.274839 kubelet[3386]: I0513 23:45:26.274806 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b109c94-4423-43d5-babf-1f349408342e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "8b109c94-4423-43d5-babf-1f349408342e" (UID: "8b109c94-4423-43d5-babf-1f349408342e"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 23:45:26.275403 kubelet[3386]: I0513 23:45:26.275368 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b109c94-4423-43d5-babf-1f349408342e-kube-api-access-tp5lt" (OuterVolumeSpecName: "kube-api-access-tp5lt") pod "8b109c94-4423-43d5-babf-1f349408342e" (UID: "8b109c94-4423-43d5-babf-1f349408342e"). InnerVolumeSpecName "kube-api-access-tp5lt". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:45:26.275788 systemd[1]: var-lib-kubelet-pods-8b109c94\x2d4423\x2d43d5\x2dbabf\x2d1f349408342e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtp5lt.mount: Deactivated successfully. May 13 23:45:26.368143 kubelet[3386]: I0513 23:45:26.368102 3386 reconciler_common.go:288] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8b109c94-4423-43d5-babf-1f349408342e-calico-apiserver-certs\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:26.368143 kubelet[3386]: I0513 23:45:26.368149 3386 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tp5lt\" (UniqueName: \"kubernetes.io/projected/8b109c94-4423-43d5-babf-1f349408342e-kube-api-access-tp5lt\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:26.432370 systemd[1]: Removed slice kubepods-besteffort-pod8b109c94_4423_43d5_babf_1f349408342e.slice - libcontainer container kubepods-besteffort-pod8b109c94_4423_43d5_babf_1f349408342e.slice. 
May 13 23:45:26.819288 kubelet[3386]: I0513 23:45:26.819063 3386 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b109c94-4423-43d5-babf-1f349408342e" path="/var/lib/kubelet/pods/8b109c94-4423-43d5-babf-1f349408342e/volumes" May 13 23:45:27.177911 containerd[1765]: time="2025-05-13T23:45:27.177351628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:27.223378 containerd[1765]: time="2025-05-13T23:45:27.223294543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 23:45:27.273573 containerd[1765]: time="2025-05-13T23:45:27.273513666Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:27.317462 containerd[1765]: time="2025-05-13T23:45:27.317375097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:27.318337 containerd[1765]: time="2025-05-13T23:45:27.318196499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 5.737874553s" May 13 23:45:27.318337 containerd[1765]: time="2025-05-13T23:45:27.318246459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 23:45:27.321358 containerd[1765]: time="2025-05-13T23:45:27.321325424Z" level=info msg="CreateContainer within sandbox \"9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 23:45:27.478600 containerd[1765]: time="2025-05-13T23:45:27.478486641Z" level=info msg="Container 63dbad73a8b035869195baafb04ece1c6ad9542299d006a700a3e450051c8605: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:27.483901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242977556.mount: Deactivated successfully. May 13 23:45:27.583014 containerd[1765]: time="2025-05-13T23:45:27.582961852Z" level=info msg="CreateContainer within sandbox \"9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"63dbad73a8b035869195baafb04ece1c6ad9542299d006a700a3e450051c8605\"" May 13 23:45:27.584333 containerd[1765]: time="2025-05-13T23:45:27.583580173Z" level=info msg="StartContainer for \"63dbad73a8b035869195baafb04ece1c6ad9542299d006a700a3e450051c8605\"" May 13 23:45:27.586122 containerd[1765]: time="2025-05-13T23:45:27.586080858Z" level=info msg="connecting to shim 63dbad73a8b035869195baafb04ece1c6ad9542299d006a700a3e450051c8605" address="unix:///run/containerd/s/93dd30ce1d1a13d22688884bc1f743ecfa2a601303a7ccd479788d689f984b32" protocol=ttrpc version=3 May 13 23:45:27.606414 systemd[1]: Started cri-containerd-63dbad73a8b035869195baafb04ece1c6ad9542299d006a700a3e450051c8605.scope - libcontainer container 63dbad73a8b035869195baafb04ece1c6ad9542299d006a700a3e450051c8605. 
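[Editor's note] The pull entries for calico/csi:v3.29.3 that follow report both the bytes actually read over the network (7474935) and the wall-clock duration of the pull (5.737874553s), which is enough for a rough transfer-rate estimate; a quick calculation, assuming the "bytes read" counter covers the whole PullImage call:

    // pullrate.go: back-of-the-envelope throughput for the csi:v3.29.3 pull.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 7474935 // "active requests=0, bytes read=7474935"
        elapsed, err := time.ParseDuration("5.737874553s") // "Pulled image ... in 5.737874553s"
        if err != nil {
            panic(err)
        }
        rate := float64(bytesRead) / elapsed.Seconds()
        fmt.Printf("~%.2f MiB/s over %s\n", rate/(1<<20), elapsed)
    }

The earlier kube-controllers pull (33923266 bytes in 10.493656998s) works out to a similar order of magnitude.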
May 13 23:45:27.643761 containerd[1765]: time="2025-05-13T23:45:27.643652712Z" level=info msg="StartContainer for \"63dbad73a8b035869195baafb04ece1c6ad9542299d006a700a3e450051c8605\" returns successfully" May 13 23:45:27.646054 containerd[1765]: time="2025-05-13T23:45:27.645991196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 23:45:29.101319 containerd[1765]: time="2025-05-13T23:45:29.100752779Z" level=info msg="StopContainer for \"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\" with timeout 300 (s)" May 13 23:45:29.107533 containerd[1765]: time="2025-05-13T23:45:29.105048906Z" level=info msg="Stop container \"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\" with signal terminated" May 13 23:45:29.312701 containerd[1765]: time="2025-05-13T23:45:29.312640366Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" id:\"5f68f53c775d7dc831b602cef3ddd12141d62fb6017650cb70f1bffacbc5797a\" pid:5921 exited_at:{seconds:1747179929 nanos:312106645}" May 13 23:45:29.375541 containerd[1765]: time="2025-05-13T23:45:29.374608227Z" level=info msg="StopContainer for \"426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78\" with timeout 30 (s)" May 13 23:45:29.375541 containerd[1765]: time="2025-05-13T23:45:29.375331268Z" level=info msg="Stop container \"426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78\" with signal terminated" May 13 23:45:29.393198 systemd[1]: cri-containerd-426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78.scope: Deactivated successfully. May 13 23:45:29.398571 containerd[1765]: time="2025-05-13T23:45:29.397904185Z" level=info msg="received exit event container_id:\"426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78\" id:\"426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78\" pid:5728 exit_status:2 exited_at:{seconds:1747179929 nanos:397656745}" May 13 23:45:29.398978 containerd[1765]: time="2025-05-13T23:45:29.398939947Z" level=info msg="TaskExit event in podsandbox handler container_id:\"426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78\" id:\"426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78\" pid:5728 exit_status:2 exited_at:{seconds:1747179929 nanos:397656745}" May 13 23:45:29.446287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78-rootfs.mount: Deactivated successfully. 
May 13 23:45:29.497591 containerd[1765]: time="2025-05-13T23:45:29.496897908Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" id:\"6bc19e90c0504216fa1d684d56ff6aedc43d3eec30c0eba1ab75e74b3e9228e7\" pid:5976 exited_at:{seconds:1747179929 nanos:496156906}" May 13 23:45:29.498901 containerd[1765]: time="2025-05-13T23:45:29.498700951Z" level=info msg="StopContainer for \"426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78\" returns successfully" May 13 23:45:29.499809 containerd[1765]: time="2025-05-13T23:45:29.499725952Z" level=info msg="StopPodSandbox for \"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\"" May 13 23:45:29.500793 containerd[1765]: time="2025-05-13T23:45:29.500760914Z" level=info msg="Container to stop \"426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:45:29.519134 systemd[1]: cri-containerd-335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f.scope: Deactivated successfully. May 13 23:45:29.528632 containerd[1765]: time="2025-05-13T23:45:29.528159599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\" id:\"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\" pid:5152 exit_status:137 exited_at:{seconds:1747179929 nanos:527569638}" May 13 23:45:29.581233 containerd[1765]: time="2025-05-13T23:45:29.579122442Z" level=info msg="shim disconnected" id=335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f namespace=k8s.io May 13 23:45:29.579991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f-rootfs.mount: Deactivated successfully. May 13 23:45:29.581718 containerd[1765]: time="2025-05-13T23:45:29.581455686Z" level=warning msg="cleaning up after shim disconnected" id=335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f namespace=k8s.io May 13 23:45:29.581791 containerd[1765]: time="2025-05-13T23:45:29.579686123Z" level=info msg="received exit event sandbox_id:\"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\" exit_status:137 exited_at:{seconds:1747179929 nanos:527569638}" May 13 23:45:29.581939 containerd[1765]: time="2025-05-13T23:45:29.581846967Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:45:29.584143 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f-shm.mount: Deactivated successfully. 
May 13 23:45:29.704870 containerd[1765]: time="2025-05-13T23:45:29.704817768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:29.711323 containerd[1765]: time="2025-05-13T23:45:29.711093098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 23:45:29.718271 containerd[1765]: time="2025-05-13T23:45:29.717154188Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:29.724104 containerd[1765]: time="2025-05-13T23:45:29.723847159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:29.724516 containerd[1765]: time="2025-05-13T23:45:29.724484760Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 2.078443164s" May 13 23:45:29.724516 containerd[1765]: time="2025-05-13T23:45:29.724514920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 23:45:29.726799 containerd[1765]: time="2025-05-13T23:45:29.726764804Z" level=info msg="CreateContainer within sandbox \"9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 23:45:29.764232 containerd[1765]: time="2025-05-13T23:45:29.761782742Z" level=info msg="Container 7f6b8166db987104c4b57da6abf2ce588f9e989e6fd01a9a02268442c2d9f5d9: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:29.783804 containerd[1765]: time="2025-05-13T23:45:29.783763178Z" level=info msg="CreateContainer within sandbox \"9cab2eb399e9a7ec6ab33aff604f6d03b95ffb9cb15443fed22975126152009a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7f6b8166db987104c4b57da6abf2ce588f9e989e6fd01a9a02268442c2d9f5d9\"" May 13 23:45:29.784585 containerd[1765]: time="2025-05-13T23:45:29.784553139Z" level=info msg="StartContainer for \"7f6b8166db987104c4b57da6abf2ce588f9e989e6fd01a9a02268442c2d9f5d9\"" May 13 23:45:29.785954 containerd[1765]: time="2025-05-13T23:45:29.785926501Z" level=info msg="connecting to shim 7f6b8166db987104c4b57da6abf2ce588f9e989e6fd01a9a02268442c2d9f5d9" address="unix:///run/containerd/s/93dd30ce1d1a13d22688884bc1f743ecfa2a601303a7ccd479788d689f984b32" protocol=ttrpc version=3 May 13 23:45:29.810400 systemd[1]: Started cri-containerd-7f6b8166db987104c4b57da6abf2ce588f9e989e6fd01a9a02268442c2d9f5d9.scope - libcontainer container 7f6b8166db987104c4b57da6abf2ce588f9e989e6fd01a9a02268442c2d9f5d9. 
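[Editor's note] The container started here is the csi-node-driver-registrar sidecar; once it announces the driver, the kubelet validates and registers csi.tigera.io over the socket logged a little further down (/var/lib/kubelet/plugins/csi.tigera.io/csi.sock). As a diagnostic, that socket can be probed with the CSI Identity service; a sketch assuming the standard container-storage-interface Go bindings (this is not the kubelet's own registration code):

    // csiprobe.go: ask the CSI driver on the logged socket who it is.
    package main

    import (
        "context"
        "log"
        "time"

        csi "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial CSI socket: %v", err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        info, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
        if err != nil {
            log.Fatalf("GetPluginInfo: %v", err)
        }
        log.Printf("driver %q version %q", info.GetName(), info.GetVendorVersion())
    }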
May 13 23:45:29.851413 containerd[1765]: time="2025-05-13T23:45:29.851302248Z" level=info msg="StartContainer for \"7f6b8166db987104c4b57da6abf2ce588f9e989e6fd01a9a02268442c2d9f5d9\" returns successfully" May 13 23:45:29.987307 kubelet[3386]: I0513 23:45:29.986603 3386 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 23:45:29.987307 kubelet[3386]: I0513 23:45:29.986648 3386 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 23:45:30.132096 kubelet[3386]: I0513 23:45:30.132064 3386 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" May 13 23:45:31.150698 systemd-networkd[1332]: calib4ae646a43c: Link DOWN May 13 23:45:31.150705 systemd-networkd[1332]: calib4ae646a43c: Lost carrier May 13 23:45:31.207851 containerd[1765]: time="2025-05-13T23:45:31.207729030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" id:\"1be97115eb15dc916059c0fd62f7112b135ec9d1a3c3ebfc21b8d5165bb22670\" pid:5951 exited_at:{seconds:1747179931 nanos:206083267}" May 13 23:45:31.214120 containerd[1765]: time="2025-05-13T23:45:31.213443399Z" level=info msg="StopContainer for \"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" with timeout 4 (s)" May 13 23:45:31.215516 containerd[1765]: time="2025-05-13T23:45:31.215495603Z" level=info msg="Stop container \"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" with signal terminated" May 13 23:45:31.243185 systemd[1]: cri-containerd-9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74.scope: Deactivated successfully. May 13 23:45:31.243470 systemd[1]: cri-containerd-9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74.scope: Consumed 1.577s CPU time, 173.9M memory peak, 728K written to disk. May 13 23:45:31.249671 containerd[1765]: time="2025-05-13T23:45:31.249601579Z" level=info msg="received exit event container_id:\"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" id:\"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" pid:4441 exited_at:{seconds:1747179931 nanos:248774057}" May 13 23:45:31.253224 containerd[1765]: time="2025-05-13T23:45:31.250164700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" id:\"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" pid:4441 exited_at:{seconds:1747179931 nanos:248774057}" May 13 23:45:31.281192 containerd[1765]: 2025-05-13 23:45:31.145 [INFO][6066] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" May 13 23:45:31.281192 containerd[1765]: 2025-05-13 23:45:31.147 [INFO][6066] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" iface="eth0" netns="/var/run/netns/cni-c9f06bc4-5868-d198-c62f-ae6987090652" May 13 23:45:31.281192 containerd[1765]: 2025-05-13 23:45:31.149 [INFO][6066] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" iface="eth0" netns="/var/run/netns/cni-c9f06bc4-5868-d198-c62f-ae6987090652" May 13 23:45:31.281192 containerd[1765]: 2025-05-13 23:45:31.157 [INFO][6066] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" after=10.175377ms iface="eth0" netns="/var/run/netns/cni-c9f06bc4-5868-d198-c62f-ae6987090652" May 13 23:45:31.281192 containerd[1765]: 2025-05-13 23:45:31.158 [INFO][6066] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" May 13 23:45:31.281192 containerd[1765]: 2025-05-13 23:45:31.158 [INFO][6066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" May 13 23:45:31.281192 containerd[1765]: 2025-05-13 23:45:31.200 [INFO][6116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" HandleID="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:45:31.281192 containerd[1765]: 2025-05-13 23:45:31.200 [INFO][6116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:31.281192 containerd[1765]: 2025-05-13 23:45:31.200 [INFO][6116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:45:31.281192 containerd[1765]: 2025-05-13 23:45:31.265 [INFO][6116] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" HandleID="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:45:31.281192 containerd[1765]: 2025-05-13 23:45:31.265 [INFO][6116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" HandleID="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:45:31.281192 containerd[1765]: 2025-05-13 23:45:31.267 [INFO][6116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:45:31.281192 containerd[1765]: 2025-05-13 23:45:31.271 [INFO][6066] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" May 13 23:45:31.281192 containerd[1765]: time="2025-05-13T23:45:31.276481743Z" level=info msg="TearDown network for sandbox \"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\" successfully" May 13 23:45:31.281192 containerd[1765]: time="2025-05-13T23:45:31.276522503Z" level=info msg="StopPodSandbox for \"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\" returns successfully" May 13 23:45:31.279059 systemd[1]: run-netns-cni\x2dc9f06bc4\x2d5868\x2dd198\x2dc62f\x2dae6987090652.mount: Deactivated successfully. May 13 23:45:31.279151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74-rootfs.mount: Deactivated successfully. 
May 13 23:45:31.317317 kubelet[3386]: I0513 23:45:31.316686 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kqzwk" podStartSLOduration=45.620372938 podStartE2EDuration="1m4.316667009s" podCreationTimestamp="2025-05-13 23:44:27 +0000 UTC" firstStartedPulling="2025-05-13 23:45:11.029140971 +0000 UTC m=+58.326077580" lastFinishedPulling="2025-05-13 23:45:29.725435082 +0000 UTC m=+77.022371651" observedRunningTime="2025-05-13 23:45:31.162463596 +0000 UTC m=+78.459400205" watchObservedRunningTime="2025-05-13 23:45:31.316667009 +0000 UTC m=+78.613603618" May 13 23:45:31.361870 containerd[1765]: time="2025-05-13T23:45:31.361796682Z" level=info msg="StopContainer for \"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" returns successfully" May 13 23:45:31.362988 containerd[1765]: time="2025-05-13T23:45:31.362719364Z" level=info msg="StopPodSandbox for \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\"" May 13 23:45:31.362988 containerd[1765]: time="2025-05-13T23:45:31.362778924Z" level=info msg="Container to stop \"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:45:31.362988 containerd[1765]: time="2025-05-13T23:45:31.362790444Z" level=info msg="Container to stop \"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:45:31.362988 containerd[1765]: time="2025-05-13T23:45:31.362801324Z" level=info msg="Container to stop \"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:45:31.371647 systemd[1]: cri-containerd-060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc.scope: Deactivated successfully. May 13 23:45:31.374560 containerd[1765]: time="2025-05-13T23:45:31.374425143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" id:\"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" pid:3977 exit_status:137 exited_at:{seconds:1747179931 nanos:373334661}" May 13 23:45:31.399465 kubelet[3386]: I0513 23:45:31.398937 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpsrl\" (UniqueName: \"kubernetes.io/projected/447c5b24-e035-4d61-a6dc-6b184c49bb3e-kube-api-access-fpsrl\") pod \"447c5b24-e035-4d61-a6dc-6b184c49bb3e\" (UID: \"447c5b24-e035-4d61-a6dc-6b184c49bb3e\") " May 13 23:45:31.399465 kubelet[3386]: I0513 23:45:31.398980 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/447c5b24-e035-4d61-a6dc-6b184c49bb3e-tigera-ca-bundle\") pod \"447c5b24-e035-4d61-a6dc-6b184c49bb3e\" (UID: \"447c5b24-e035-4d61-a6dc-6b184c49bb3e\") " May 13 23:45:31.403349 kubelet[3386]: I0513 23:45:31.402686 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/447c5b24-e035-4d61-a6dc-6b184c49bb3e-kube-api-access-fpsrl" (OuterVolumeSpecName: "kube-api-access-fpsrl") pod "447c5b24-e035-4d61-a6dc-6b184c49bb3e" (UID: "447c5b24-e035-4d61-a6dc-6b184c49bb3e"). InnerVolumeSpecName "kube-api-access-fpsrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:45:31.407878 kubelet[3386]: I0513 23:45:31.406745 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/447c5b24-e035-4d61-a6dc-6b184c49bb3e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "447c5b24-e035-4d61-a6dc-6b184c49bb3e" (UID: "447c5b24-e035-4d61-a6dc-6b184c49bb3e"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 23:45:31.406859 systemd[1]: var-lib-kubelet-pods-447c5b24\x2de035\x2d4d61\x2da6dc\x2d6b184c49bb3e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfpsrl.mount: Deactivated successfully. May 13 23:45:31.410551 systemd[1]: var-lib-kubelet-pods-447c5b24\x2de035\x2d4d61\x2da6dc\x2d6b184c49bb3e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. May 13 23:45:31.418596 containerd[1765]: time="2025-05-13T23:45:31.418548255Z" level=info msg="shim disconnected" id=060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc namespace=k8s.io May 13 23:45:31.418689 containerd[1765]: time="2025-05-13T23:45:31.418585815Z" level=warning msg="cleaning up after shim disconnected" id=060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc namespace=k8s.io May 13 23:45:31.418689 containerd[1765]: time="2025-05-13T23:45:31.418614095Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:45:31.419069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc-rootfs.mount: Deactivated successfully. May 13 23:45:31.454545 containerd[1765]: time="2025-05-13T23:45:31.454479274Z" level=info msg="received exit event sandbox_id:\"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" exit_status:137 exited_at:{seconds:1747179931 nanos:373334661}" May 13 23:45:31.454966 containerd[1765]: time="2025-05-13T23:45:31.454860195Z" level=info msg="TearDown network for sandbox \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" successfully" May 13 23:45:31.454966 containerd[1765]: time="2025-05-13T23:45:31.454885595Z" level=info msg="StopPodSandbox for \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" returns successfully" May 13 23:45:31.457961 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc-shm.mount: Deactivated successfully. 
May 13 23:45:31.499481 kubelet[3386]: I0513 23:45:31.499435 3386 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fpsrl\" (UniqueName: \"kubernetes.io/projected/447c5b24-e035-4d61-a6dc-6b184c49bb3e-kube-api-access-fpsrl\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:31.499481 kubelet[3386]: I0513 23:45:31.499468 3386 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/447c5b24-e035-4d61-a6dc-6b184c49bb3e-tigera-ca-bundle\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:31.528726 kubelet[3386]: E0513 23:45:31.528025 3386 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56848f67-cdcb-434e-b946-495c63c2981d" containerName="calico-node" May 13 23:45:31.528726 kubelet[3386]: E0513 23:45:31.528058 3386 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8b109c94-4423-43d5-babf-1f349408342e" containerName="calico-apiserver" May 13 23:45:31.528726 kubelet[3386]: E0513 23:45:31.528069 3386 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56848f67-cdcb-434e-b946-495c63c2981d" containerName="install-cni" May 13 23:45:31.528726 kubelet[3386]: E0513 23:45:31.528076 3386 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="447c5b24-e035-4d61-a6dc-6b184c49bb3e" containerName="calico-kube-controllers" May 13 23:45:31.528726 kubelet[3386]: E0513 23:45:31.528084 3386 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56848f67-cdcb-434e-b946-495c63c2981d" containerName="flexvol-driver" May 13 23:45:31.528726 kubelet[3386]: E0513 23:45:31.528092 3386 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6acda27-adaa-464d-9d6d-5b299eb91453" containerName="calico-apiserver" May 13 23:45:31.528726 kubelet[3386]: I0513 23:45:31.528120 3386 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6acda27-adaa-464d-9d6d-5b299eb91453" containerName="calico-apiserver" May 13 23:45:31.528726 kubelet[3386]: I0513 23:45:31.528126 3386 memory_manager.go:354] "RemoveStaleState removing state" podUID="447c5b24-e035-4d61-a6dc-6b184c49bb3e" containerName="calico-kube-controllers" May 13 23:45:31.528726 kubelet[3386]: I0513 23:45:31.528133 3386 memory_manager.go:354] "RemoveStaleState removing state" podUID="56848f67-cdcb-434e-b946-495c63c2981d" containerName="calico-node" May 13 23:45:31.528726 kubelet[3386]: I0513 23:45:31.528139 3386 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b109c94-4423-43d5-babf-1f349408342e" containerName="calico-apiserver" May 13 23:45:31.539654 systemd[1]: Created slice kubepods-besteffort-podaae8d902_5f07_4f98_8fb9_9cc62beec659.slice - libcontainer container kubepods-besteffort-podaae8d902_5f07_4f98_8fb9_9cc62beec659.slice. 
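[Editor's note] The slice names in the "Created slice" / "Removed slice" lines encode the pod's QoS class and UID, with the UID's dashes mapped to underscores (the systemd cgroup driver naming visible throughout this log). A one-liner that rebuilds the slice name of the freshly created calico-node pod, whose UID appears in the volume lines that follow:

    // podslice.go: derive the systemd slice name from QoS class and pod UID.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("besteffort", "aae8d902-5f07-4f98-8fb9-9cc62beec659"))
        // -> kubepods-besteffort-podaae8d902_5f07_4f98_8fb9_9cc62beec659.slice
    }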
May 13 23:45:31.601075 kubelet[3386]: I0513 23:45:31.599849 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-var-run-calico\") pod \"56848f67-cdcb-434e-b946-495c63c2981d\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " May 13 23:45:31.601075 kubelet[3386]: I0513 23:45:31.599894 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-lib-modules\") pod \"56848f67-cdcb-434e-b946-495c63c2981d\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " May 13 23:45:31.601075 kubelet[3386]: I0513 23:45:31.599911 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-cni-log-dir\") pod \"56848f67-cdcb-434e-b946-495c63c2981d\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " May 13 23:45:31.601075 kubelet[3386]: I0513 23:45:31.599925 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-flexvol-driver-host\") pod \"56848f67-cdcb-434e-b946-495c63c2981d\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " May 13 23:45:31.601075 kubelet[3386]: I0513 23:45:31.599944 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-policysync\") pod \"56848f67-cdcb-434e-b946-495c63c2981d\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " May 13 23:45:31.601075 kubelet[3386]: I0513 23:45:31.599964 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56848f67-cdcb-434e-b946-495c63c2981d-tigera-ca-bundle\") pod \"56848f67-cdcb-434e-b946-495c63c2981d\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " May 13 23:45:31.601468 kubelet[3386]: I0513 23:45:31.599979 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-xtables-lock\") pod \"56848f67-cdcb-434e-b946-495c63c2981d\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " May 13 23:45:31.601468 kubelet[3386]: I0513 23:45:31.600005 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-var-lib-calico\") pod \"56848f67-cdcb-434e-b946-495c63c2981d\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " May 13 23:45:31.601468 kubelet[3386]: I0513 23:45:31.600029 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6fdt\" (UniqueName: \"kubernetes.io/projected/56848f67-cdcb-434e-b946-495c63c2981d-kube-api-access-p6fdt\") pod \"56848f67-cdcb-434e-b946-495c63c2981d\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " May 13 23:45:31.601468 kubelet[3386]: I0513 23:45:31.600048 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/56848f67-cdcb-434e-b946-495c63c2981d-node-certs\") pod \"56848f67-cdcb-434e-b946-495c63c2981d\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " May 13 23:45:31.601468 kubelet[3386]: I0513 
23:45:31.600066 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-cni-bin-dir\") pod \"56848f67-cdcb-434e-b946-495c63c2981d\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " May 13 23:45:31.601468 kubelet[3386]: I0513 23:45:31.600073 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "56848f67-cdcb-434e-b946-495c63c2981d" (UID: "56848f67-cdcb-434e-b946-495c63c2981d"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:45:31.601600 kubelet[3386]: I0513 23:45:31.600083 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-cni-net-dir\") pod \"56848f67-cdcb-434e-b946-495c63c2981d\" (UID: \"56848f67-cdcb-434e-b946-495c63c2981d\") " May 13 23:45:31.601600 kubelet[3386]: I0513 23:45:31.600111 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "56848f67-cdcb-434e-b946-495c63c2981d" (UID: "56848f67-cdcb-434e-b946-495c63c2981d"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:45:31.601600 kubelet[3386]: I0513 23:45:31.600133 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-policysync" (OuterVolumeSpecName: "policysync") pod "56848f67-cdcb-434e-b946-495c63c2981d" (UID: "56848f67-cdcb-434e-b946-495c63c2981d"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:45:31.601600 kubelet[3386]: I0513 23:45:31.600166 3386 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-cni-net-dir\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:31.601600 kubelet[3386]: I0513 23:45:31.600180 3386 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-flexvol-driver-host\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:31.601600 kubelet[3386]: I0513 23:45:31.600203 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "56848f67-cdcb-434e-b946-495c63c2981d" (UID: "56848f67-cdcb-434e-b946-495c63c2981d"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:45:31.601725 kubelet[3386]: I0513 23:45:31.600295 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "56848f67-cdcb-434e-b946-495c63c2981d" (UID: "56848f67-cdcb-434e-b946-495c63c2981d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:45:31.601725 kubelet[3386]: I0513 23:45:31.600314 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "56848f67-cdcb-434e-b946-495c63c2981d" (UID: "56848f67-cdcb-434e-b946-495c63c2981d"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:45:31.603006 kubelet[3386]: I0513 23:45:31.602732 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "56848f67-cdcb-434e-b946-495c63c2981d" (UID: "56848f67-cdcb-434e-b946-495c63c2981d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:45:31.603006 kubelet[3386]: I0513 23:45:31.602794 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "56848f67-cdcb-434e-b946-495c63c2981d" (UID: "56848f67-cdcb-434e-b946-495c63c2981d"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:45:31.604130 kubelet[3386]: I0513 23:45:31.604013 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "56848f67-cdcb-434e-b946-495c63c2981d" (UID: "56848f67-cdcb-434e-b946-495c63c2981d"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:45:31.606924 kubelet[3386]: I0513 23:45:31.606796 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56848f67-cdcb-434e-b946-495c63c2981d-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "56848f67-cdcb-434e-b946-495c63c2981d" (UID: "56848f67-cdcb-434e-b946-495c63c2981d"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 23:45:31.607609 systemd[1]: var-lib-kubelet-pods-56848f67\x2dcdcb\x2d434e\x2db946\x2d495c63c2981d-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. May 13 23:45:31.611494 systemd[1]: var-lib-kubelet-pods-56848f67\x2dcdcb\x2d434e\x2db946\x2d495c63c2981d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp6fdt.mount: Deactivated successfully. May 13 23:45:31.615585 kubelet[3386]: I0513 23:45:31.615448 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56848f67-cdcb-434e-b946-495c63c2981d-node-certs" (OuterVolumeSpecName: "node-certs") pod "56848f67-cdcb-434e-b946-495c63c2981d" (UID: "56848f67-cdcb-434e-b946-495c63c2981d"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 23:45:31.615585 kubelet[3386]: I0513 23:45:31.615534 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56848f67-cdcb-434e-b946-495c63c2981d-kube-api-access-p6fdt" (OuterVolumeSpecName: "kube-api-access-p6fdt") pod "56848f67-cdcb-434e-b946-495c63c2981d" (UID: "56848f67-cdcb-434e-b946-495c63c2981d"). InnerVolumeSpecName "kube-api-access-p6fdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:45:31.616828 systemd[1]: var-lib-kubelet-pods-56848f67\x2dcdcb\x2d434e\x2db946\x2d495c63c2981d-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. May 13 23:45:31.701409 kubelet[3386]: I0513 23:45:31.701283 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aae8d902-5f07-4f98-8fb9-9cc62beec659-cni-net-dir\") pod \"calico-node-g2zvk\" (UID: \"aae8d902-5f07-4f98-8fb9-9cc62beec659\") " pod="calico-system/calico-node-g2zvk" May 13 23:45:31.702320 kubelet[3386]: I0513 23:45:31.702289 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aae8d902-5f07-4f98-8fb9-9cc62beec659-flexvol-driver-host\") pod \"calico-node-g2zvk\" (UID: \"aae8d902-5f07-4f98-8fb9-9cc62beec659\") " pod="calico-system/calico-node-g2zvk" May 13 23:45:31.702379 kubelet[3386]: I0513 23:45:31.702351 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aae8d902-5f07-4f98-8fb9-9cc62beec659-xtables-lock\") pod \"calico-node-g2zvk\" (UID: \"aae8d902-5f07-4f98-8fb9-9cc62beec659\") " pod="calico-system/calico-node-g2zvk" May 13 23:45:31.702406 kubelet[3386]: I0513 23:45:31.702375 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aae8d902-5f07-4f98-8fb9-9cc62beec659-lib-modules\") pod \"calico-node-g2zvk\" (UID: \"aae8d902-5f07-4f98-8fb9-9cc62beec659\") " pod="calico-system/calico-node-g2zvk" May 13 23:45:31.702430 kubelet[3386]: I0513 23:45:31.702395 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aae8d902-5f07-4f98-8fb9-9cc62beec659-node-certs\") pod \"calico-node-g2zvk\" (UID: \"aae8d902-5f07-4f98-8fb9-9cc62beec659\") " pod="calico-system/calico-node-g2zvk" May 13 23:45:31.702452 kubelet[3386]: I0513 23:45:31.702434 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aae8d902-5f07-4f98-8fb9-9cc62beec659-var-lib-calico\") pod \"calico-node-g2zvk\" (UID: \"aae8d902-5f07-4f98-8fb9-9cc62beec659\") " pod="calico-system/calico-node-g2zvk" May 13 23:45:31.702474 kubelet[3386]: I0513 23:45:31.702450 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdpsq\" (UniqueName: \"kubernetes.io/projected/aae8d902-5f07-4f98-8fb9-9cc62beec659-kube-api-access-kdpsq\") pod \"calico-node-g2zvk\" (UID: \"aae8d902-5f07-4f98-8fb9-9cc62beec659\") " pod="calico-system/calico-node-g2zvk" May 13 23:45:31.702501 kubelet[3386]: I0513 23:45:31.702465 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aae8d902-5f07-4f98-8fb9-9cc62beec659-cni-bin-dir\") pod \"calico-node-g2zvk\" (UID: \"aae8d902-5f07-4f98-8fb9-9cc62beec659\") " pod="calico-system/calico-node-g2zvk" May 13 23:45:31.702501 kubelet[3386]: I0513 23:45:31.702494 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/aae8d902-5f07-4f98-8fb9-9cc62beec659-tigera-ca-bundle\") pod \"calico-node-g2zvk\" (UID: \"aae8d902-5f07-4f98-8fb9-9cc62beec659\") " pod="calico-system/calico-node-g2zvk" May 13 23:45:31.702546 kubelet[3386]: I0513 23:45:31.702510 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aae8d902-5f07-4f98-8fb9-9cc62beec659-var-run-calico\") pod \"calico-node-g2zvk\" (UID: \"aae8d902-5f07-4f98-8fb9-9cc62beec659\") " pod="calico-system/calico-node-g2zvk" May 13 23:45:31.702546 kubelet[3386]: I0513 23:45:31.702524 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aae8d902-5f07-4f98-8fb9-9cc62beec659-cni-log-dir\") pod \"calico-node-g2zvk\" (UID: \"aae8d902-5f07-4f98-8fb9-9cc62beec659\") " pod="calico-system/calico-node-g2zvk" May 13 23:45:31.702546 kubelet[3386]: I0513 23:45:31.702542 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aae8d902-5f07-4f98-8fb9-9cc62beec659-policysync\") pod \"calico-node-g2zvk\" (UID: \"aae8d902-5f07-4f98-8fb9-9cc62beec659\") " pod="calico-system/calico-node-g2zvk" May 13 23:45:31.702611 kubelet[3386]: I0513 23:45:31.702564 3386 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-cni-bin-dir\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:31.702611 kubelet[3386]: I0513 23:45:31.702573 3386 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-var-run-calico\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:31.702611 kubelet[3386]: I0513 23:45:31.702581 3386 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-lib-modules\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:31.702611 kubelet[3386]: I0513 23:45:31.702590 3386 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-cni-log-dir\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:31.702611 kubelet[3386]: I0513 23:45:31.702598 3386 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-policysync\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:31.702611 kubelet[3386]: I0513 23:45:31.702606 3386 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56848f67-cdcb-434e-b946-495c63c2981d-tigera-ca-bundle\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:31.702731 kubelet[3386]: I0513 23:45:31.702614 3386 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-xtables-lock\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:31.702731 kubelet[3386]: I0513 23:45:31.702623 3386 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/56848f67-cdcb-434e-b946-495c63c2981d-var-lib-calico\") on 
node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:31.702731 kubelet[3386]: I0513 23:45:31.702631 3386 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-p6fdt\" (UniqueName: \"kubernetes.io/projected/56848f67-cdcb-434e-b946-495c63c2981d-kube-api-access-p6fdt\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:31.702731 kubelet[3386]: I0513 23:45:31.702640 3386 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/56848f67-cdcb-434e-b946-495c63c2981d-node-certs\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:31.843385 containerd[1765]: time="2025-05-13T23:45:31.843344111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g2zvk,Uid:aae8d902-5f07-4f98-8fb9-9cc62beec659,Namespace:calico-system,Attempt:0,}" May 13 23:45:31.909795 containerd[1765]: time="2025-05-13T23:45:31.909522220Z" level=info msg="connecting to shim 61df49f7ef82fbbaef0f23446316857c2c9fdc825d83198ce345f0da0e4341ca" address="unix:///run/containerd/s/9bde12a9f17062c4678e373bfeca6a9bd5be3c5f9a0715a60a64e94d7cdde0d4" namespace=k8s.io protocol=ttrpc version=3 May 13 23:45:31.928391 systemd[1]: Started cri-containerd-61df49f7ef82fbbaef0f23446316857c2c9fdc825d83198ce345f0da0e4341ca.scope - libcontainer container 61df49f7ef82fbbaef0f23446316857c2c9fdc825d83198ce345f0da0e4341ca. May 13 23:45:31.969540 containerd[1765]: time="2025-05-13T23:45:31.968836957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g2zvk,Uid:aae8d902-5f07-4f98-8fb9-9cc62beec659,Namespace:calico-system,Attempt:0,} returns sandbox id \"61df49f7ef82fbbaef0f23446316857c2c9fdc825d83198ce345f0da0e4341ca\"" May 13 23:45:31.972444 containerd[1765]: time="2025-05-13T23:45:31.972414723Z" level=info msg="CreateContainer within sandbox \"61df49f7ef82fbbaef0f23446316857c2c9fdc825d83198ce345f0da0e4341ca\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 23:45:32.011251 containerd[1765]: time="2025-05-13T23:45:32.011060986Z" level=info msg="Container ba4d6f18e5ff2468584e68b31af682aa3718530107114d21bc259bc3153a3c7c: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:32.035413 containerd[1765]: time="2025-05-13T23:45:32.035368786Z" level=info msg="CreateContainer within sandbox \"61df49f7ef82fbbaef0f23446316857c2c9fdc825d83198ce345f0da0e4341ca\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ba4d6f18e5ff2468584e68b31af682aa3718530107114d21bc259bc3153a3c7c\"" May 13 23:45:32.038845 containerd[1765]: time="2025-05-13T23:45:32.036162067Z" level=info msg="StartContainer for \"ba4d6f18e5ff2468584e68b31af682aa3718530107114d21bc259bc3153a3c7c\"" May 13 23:45:32.038845 containerd[1765]: time="2025-05-13T23:45:32.038020550Z" level=info msg="connecting to shim ba4d6f18e5ff2468584e68b31af682aa3718530107114d21bc259bc3153a3c7c" address="unix:///run/containerd/s/9bde12a9f17062c4678e373bfeca6a9bd5be3c5f9a0715a60a64e94d7cdde0d4" protocol=ttrpc version=3 May 13 23:45:32.059395 systemd[1]: Started cri-containerd-ba4d6f18e5ff2468584e68b31af682aa3718530107114d21bc259bc3153a3c7c.scope - libcontainer container ba4d6f18e5ff2468584e68b31af682aa3718530107114d21bc259bc3153a3c7c. 
May 13 23:45:32.106158 containerd[1765]: time="2025-05-13T23:45:32.106116022Z" level=info msg="StartContainer for \"ba4d6f18e5ff2468584e68b31af682aa3718530107114d21bc259bc3153a3c7c\" returns successfully" May 13 23:45:32.116507 systemd[1]: cri-containerd-ba4d6f18e5ff2468584e68b31af682aa3718530107114d21bc259bc3153a3c7c.scope: Deactivated successfully. May 13 23:45:32.117358 systemd[1]: cri-containerd-ba4d6f18e5ff2468584e68b31af682aa3718530107114d21bc259bc3153a3c7c.scope: Consumed 27ms CPU time, 7.8M memory peak, 6.2M written to disk. May 13 23:45:32.120859 containerd[1765]: time="2025-05-13T23:45:32.120714166Z" level=info msg="received exit event container_id:\"ba4d6f18e5ff2468584e68b31af682aa3718530107114d21bc259bc3153a3c7c\" id:\"ba4d6f18e5ff2468584e68b31af682aa3718530107114d21bc259bc3153a3c7c\" pid:6253 exited_at:{seconds:1747179932 nanos:120364765}" May 13 23:45:32.120859 containerd[1765]: time="2025-05-13T23:45:32.120770486Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba4d6f18e5ff2468584e68b31af682aa3718530107114d21bc259bc3153a3c7c\" id:\"ba4d6f18e5ff2468584e68b31af682aa3718530107114d21bc259bc3153a3c7c\" pid:6253 exited_at:{seconds:1747179932 nanos:120364765}" May 13 23:45:32.157813 kubelet[3386]: I0513 23:45:32.155459 3386 scope.go:117] "RemoveContainer" containerID="9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74" May 13 23:45:32.160837 systemd[1]: Removed slice kubepods-besteffort-pod447c5b24_e035_4d61_a6dc_6b184c49bb3e.slice - libcontainer container kubepods-besteffort-pod447c5b24_e035_4d61_a6dc_6b184c49bb3e.slice. May 13 23:45:32.180885 containerd[1765]: time="2025-05-13T23:45:32.177758859Z" level=info msg="TaskExit event in podsandbox handler container_id:\"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\" id:\"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\" pid:4108 exit_status:1 exited_at:{seconds:1747179932 nanos:175117935}" May 13 23:45:32.180885 containerd[1765]: time="2025-05-13T23:45:32.177759579Z" level=info msg="received exit event container_id:\"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\" id:\"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\" pid:4108 exit_status:1 exited_at:{seconds:1747179932 nanos:175117935}" May 13 23:45:32.163484 systemd[1]: Removed slice kubepods-besteffort-pod56848f67_cdcb_434e_b946_495c63c2981d.slice - libcontainer container kubepods-besteffort-pod56848f67_cdcb_434e_b946_495c63c2981d.slice. May 13 23:45:32.163596 systemd[1]: kubepods-besteffort-pod56848f67_cdcb_434e_b946_495c63c2981d.slice: Consumed 1.967s CPU time, 288.9M memory peak, 157.2M written to disk. May 13 23:45:32.173543 systemd[1]: cri-containerd-198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456.scope: Deactivated successfully. 
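The exited_at fields in the TaskExit events above are protobuf timestamps (Unix seconds plus nanoseconds); converting the logged values shows they line up with the journal time of the same event. A minimal reading aid using the numbers from the log:

```python
# Convert a containerd exited_at timestamp (seconds + nanos, copied from the
# log above) to wall-clock time; it matches the journal's 23:45:32.120 entry.
from datetime import datetime, timezone

exited_at = {"seconds": 1747179932, "nanos": 120364765}
ts = datetime.fromtimestamp(exited_at["seconds"], tz=timezone.utc)
ts = ts.replace(microsecond=exited_at["nanos"] // 1000)
print(ts.isoformat())  # 2025-05-13T23:45:32.120364+00:00, i.e. 23:45:32.120 UTC
```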
May 13 23:45:32.186525 containerd[1765]: time="2025-05-13T23:45:32.186438393Z" level=info msg="RemoveContainer for \"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\"" May 13 23:45:32.207503 containerd[1765]: time="2025-05-13T23:45:32.207388068Z" level=info msg="RemoveContainer for \"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" returns successfully" May 13 23:45:32.207636 kubelet[3386]: I0513 23:45:32.207575 3386 scope.go:117] "RemoveContainer" containerID="ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0" May 13 23:45:32.213067 containerd[1765]: time="2025-05-13T23:45:32.213026357Z" level=info msg="RemoveContainer for \"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\"" May 13 23:45:32.236275 containerd[1765]: time="2025-05-13T23:45:32.236085275Z" level=info msg="RemoveContainer for \"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\" returns successfully" May 13 23:45:32.236746 kubelet[3386]: I0513 23:45:32.236713 3386 scope.go:117] "RemoveContainer" containerID="a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa" May 13 23:45:32.242191 containerd[1765]: time="2025-05-13T23:45:32.242157365Z" level=info msg="RemoveContainer for \"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\"" May 13 23:45:32.256685 containerd[1765]: time="2025-05-13T23:45:32.256631948Z" level=info msg="RemoveContainer for \"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\" returns successfully" May 13 23:45:32.257123 kubelet[3386]: I0513 23:45:32.257022 3386 scope.go:117] "RemoveContainer" containerID="9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74" May 13 23:45:32.257871 containerd[1765]: time="2025-05-13T23:45:32.257800790Z" level=error msg="ContainerStatus for \"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\": not found" May 13 23:45:32.258110 kubelet[3386]: E0513 23:45:32.258036 3386 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\": not found" containerID="9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74" May 13 23:45:32.259091 kubelet[3386]: I0513 23:45:32.259042 3386 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74"} err="failed to get container status \"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f9a0a276012c133da6b96891d59ee1c73e5821ebd10e09b2febdd6dad620a74\": not found" May 13 23:45:32.259091 kubelet[3386]: I0513 23:45:32.259086 3386 scope.go:117] "RemoveContainer" containerID="ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0" May 13 23:45:32.259357 containerd[1765]: time="2025-05-13T23:45:32.259318713Z" level=error msg="ContainerStatus for \"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\": not found" May 13 23:45:32.259891 kubelet[3386]: E0513 23:45:32.259592 3386 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\": not found" containerID="ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0" May 13 23:45:32.259891 kubelet[3386]: I0513 23:45:32.259619 3386 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0"} err="failed to get container status \"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce96683993a52ab385d2cee16561226a8d6626ecbd0d1d2bae825371c6346be0\": not found" May 13 23:45:32.259891 kubelet[3386]: I0513 23:45:32.259637 3386 scope.go:117] "RemoveContainer" containerID="a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa" May 13 23:45:32.259992 containerd[1765]: time="2025-05-13T23:45:32.259870354Z" level=error msg="ContainerStatus for \"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\": not found" May 13 23:45:32.260690 kubelet[3386]: E0513 23:45:32.260641 3386 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\": not found" containerID="a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa" May 13 23:45:32.260690 kubelet[3386]: I0513 23:45:32.260669 3386 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa"} err="failed to get container status \"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5ae8f226e7484bee7144cf2980142e572bf35d7444ad9ed79997bade50067aa\": not found" May 13 23:45:32.261483 containerd[1765]: time="2025-05-13T23:45:32.261377036Z" level=info msg="StopContainer for \"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\" returns successfully" May 13 23:45:32.261725 containerd[1765]: time="2025-05-13T23:45:32.261678836Z" level=info msg="StopPodSandbox for \"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\"" May 13 23:45:32.261782 containerd[1765]: time="2025-05-13T23:45:32.261741077Z" level=info msg="Container to stop \"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:45:32.271588 systemd[1]: cri-containerd-97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182.scope: Deactivated successfully. 
May 13 23:45:32.278124 containerd[1765]: time="2025-05-13T23:45:32.277092142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\" id:\"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\" pid:4023 exit_status:137 exited_at:{seconds:1747179932 nanos:274142177}" May 13 23:45:32.311414 containerd[1765]: time="2025-05-13T23:45:32.311361758Z" level=info msg="received exit event sandbox_id:\"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\" exit_status:137 exited_at:{seconds:1747179932 nanos:274142177}" May 13 23:45:32.312664 containerd[1765]: time="2025-05-13T23:45:32.312635000Z" level=info msg="shim disconnected" id=97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182 namespace=k8s.io May 13 23:45:32.312727 containerd[1765]: time="2025-05-13T23:45:32.312662400Z" level=warning msg="cleaning up after shim disconnected" id=97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182 namespace=k8s.io May 13 23:45:32.312727 containerd[1765]: time="2025-05-13T23:45:32.312686840Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:45:32.314089 containerd[1765]: time="2025-05-13T23:45:32.314009322Z" level=info msg="TearDown network for sandbox \"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\" successfully" May 13 23:45:32.314089 containerd[1765]: time="2025-05-13T23:45:32.314038322Z" level=info msg="StopPodSandbox for \"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\" returns successfully" May 13 23:45:32.407705 kubelet[3386]: I0513 23:45:32.407672 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgtl8\" (UniqueName: \"kubernetes.io/projected/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-kube-api-access-pgtl8\") pod \"e0ad5a15-3ec0-4420-8855-56d5eb85ddf6\" (UID: \"e0ad5a15-3ec0-4420-8855-56d5eb85ddf6\") " May 13 23:45:32.408735 kubelet[3386]: I0513 23:45:32.408171 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-tigera-ca-bundle\") pod \"e0ad5a15-3ec0-4420-8855-56d5eb85ddf6\" (UID: \"e0ad5a15-3ec0-4420-8855-56d5eb85ddf6\") " May 13 23:45:32.408735 kubelet[3386]: I0513 23:45:32.408237 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-typha-certs\") pod \"e0ad5a15-3ec0-4420-8855-56d5eb85ddf6\" (UID: \"e0ad5a15-3ec0-4420-8855-56d5eb85ddf6\") " May 13 23:45:32.412324 kubelet[3386]: I0513 23:45:32.412283 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-kube-api-access-pgtl8" (OuterVolumeSpecName: "kube-api-access-pgtl8") pod "e0ad5a15-3ec0-4420-8855-56d5eb85ddf6" (UID: "e0ad5a15-3ec0-4420-8855-56d5eb85ddf6"). InnerVolumeSpecName "kube-api-access-pgtl8". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:45:32.412840 kubelet[3386]: I0513 23:45:32.412808 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "e0ad5a15-3ec0-4420-8855-56d5eb85ddf6" (UID: "e0ad5a15-3ec0-4420-8855-56d5eb85ddf6"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 23:45:32.413031 kubelet[3386]: I0513 23:45:32.413001 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "e0ad5a15-3ec0-4420-8855-56d5eb85ddf6" (UID: "e0ad5a15-3ec0-4420-8855-56d5eb85ddf6"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 23:45:32.461409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456-rootfs.mount: Deactivated successfully. May 13 23:45:32.461499 systemd[1]: var-lib-kubelet-pods-e0ad5a15\x2d3ec0\x2d4420\x2d8855\x2d56d5eb85ddf6-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. May 13 23:45:32.461556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182-rootfs.mount: Deactivated successfully. May 13 23:45:32.461604 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182-shm.mount: Deactivated successfully. May 13 23:45:32.461655 systemd[1]: var-lib-kubelet-pods-e0ad5a15\x2d3ec0\x2d4420\x2d8855\x2d56d5eb85ddf6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpgtl8.mount: Deactivated successfully. May 13 23:45:32.461705 systemd[1]: var-lib-kubelet-pods-e0ad5a15\x2d3ec0\x2d4420\x2d8855\x2d56d5eb85ddf6-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. May 13 23:45:32.509507 kubelet[3386]: I0513 23:45:32.509369 3386 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-typha-certs\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:32.509507 kubelet[3386]: I0513 23:45:32.509403 3386 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pgtl8\" (UniqueName: \"kubernetes.io/projected/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-kube-api-access-pgtl8\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:32.509507 kubelet[3386]: I0513 23:45:32.509413 3386 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6-tigera-ca-bundle\") on node \"ci-4284.0.0-n-13ce75130c\" DevicePath \"\"" May 13 23:45:32.818814 kubelet[3386]: I0513 23:45:32.818713 3386 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="447c5b24-e035-4d61-a6dc-6b184c49bb3e" path="/var/lib/kubelet/pods/447c5b24-e035-4d61-a6dc-6b184c49bb3e/volumes" May 13 23:45:32.819381 kubelet[3386]: I0513 23:45:32.819351 3386 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56848f67-cdcb-434e-b946-495c63c2981d" path="/var/lib/kubelet/pods/56848f67-cdcb-434e-b946-495c63c2981d/volumes" May 13 23:45:32.823753 systemd[1]: Removed slice kubepods-besteffort-pode0ad5a15_3ec0_4420_8855_56d5eb85ddf6.slice - libcontainer container kubepods-besteffort-pode0ad5a15_3ec0_4420_8855_56d5eb85ddf6.slice. 
May 13 23:45:33.156523 kubelet[3386]: I0513 23:45:33.156451 3386 scope.go:117] "RemoveContainer" containerID="198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456" May 13 23:45:33.162150 containerd[1765]: time="2025-05-13T23:45:33.161739951Z" level=info msg="RemoveContainer for \"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\"" May 13 23:45:33.164538 containerd[1765]: time="2025-05-13T23:45:33.164479115Z" level=info msg="CreateContainer within sandbox \"61df49f7ef82fbbaef0f23446316857c2c9fdc825d83198ce345f0da0e4341ca\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:45:33.171887 containerd[1765]: time="2025-05-13T23:45:33.171852127Z" level=info msg="RemoveContainer for \"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\" returns successfully" May 13 23:45:33.173463 kubelet[3386]: I0513 23:45:33.173317 3386 scope.go:117] "RemoveContainer" containerID="198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456" May 13 23:45:33.173883 containerd[1765]: time="2025-05-13T23:45:33.173701650Z" level=error msg="ContainerStatus for \"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\": not found" May 13 23:45:33.173945 kubelet[3386]: E0513 23:45:33.173816 3386 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\": not found" containerID="198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456" May 13 23:45:33.173945 kubelet[3386]: I0513 23:45:33.173839 3386 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456"} err="failed to get container status \"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\": rpc error: code = NotFound desc = an error occurred when try to find container \"198bb8a364856142afc13efd700ad1b217b7c2af41bd43fbf71c2b9ac069e456\": not found" May 13 23:45:33.199064 containerd[1765]: time="2025-05-13T23:45:33.194487164Z" level=info msg="Container c143713b4258662987c30817001cb2e427b27c2d4f10860f32fba223060a3fca: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:33.228435 containerd[1765]: time="2025-05-13T23:45:33.228288580Z" level=info msg="CreateContainer within sandbox \"61df49f7ef82fbbaef0f23446316857c2c9fdc825d83198ce345f0da0e4341ca\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c143713b4258662987c30817001cb2e427b27c2d4f10860f32fba223060a3fca\"" May 13 23:45:33.229781 containerd[1765]: time="2025-05-13T23:45:33.229752342Z" level=info msg="StartContainer for \"c143713b4258662987c30817001cb2e427b27c2d4f10860f32fba223060a3fca\"" May 13 23:45:33.231167 containerd[1765]: time="2025-05-13T23:45:33.231123624Z" level=info msg="connecting to shim c143713b4258662987c30817001cb2e427b27c2d4f10860f32fba223060a3fca" address="unix:///run/containerd/s/9bde12a9f17062c4678e373bfeca6a9bd5be3c5f9a0715a60a64e94d7cdde0d4" protocol=ttrpc version=3 May 13 23:45:33.252368 systemd[1]: Started cri-containerd-c143713b4258662987c30817001cb2e427b27c2d4f10860f32fba223060a3fca.scope - libcontainer container c143713b4258662987c30817001cb2e427b27c2d4f10860f32fba223060a3fca. 
May 13 23:45:33.292236 containerd[1765]: time="2025-05-13T23:45:33.292153364Z" level=info msg="StartContainer for \"c143713b4258662987c30817001cb2e427b27c2d4f10860f32fba223060a3fca\" returns successfully" May 13 23:45:33.740630 containerd[1765]: time="2025-05-13T23:45:33.740569899Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/10-calico.conflist\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" May 13 23:45:33.743436 systemd[1]: cri-containerd-c143713b4258662987c30817001cb2e427b27c2d4f10860f32fba223060a3fca.scope: Deactivated successfully. May 13 23:45:33.743726 systemd[1]: cri-containerd-c143713b4258662987c30817001cb2e427b27c2d4f10860f32fba223060a3fca.scope: Consumed 508ms CPU time, 54.9M memory peak, 34.4M read from disk. May 13 23:45:33.747389 containerd[1765]: time="2025-05-13T23:45:33.747093750Z" level=info msg="received exit event container_id:\"c143713b4258662987c30817001cb2e427b27c2d4f10860f32fba223060a3fca\" id:\"c143713b4258662987c30817001cb2e427b27c2d4f10860f32fba223060a3fca\" pid:6356 exited_at:{seconds:1747179933 nanos:746791749}" May 13 23:45:33.747618 containerd[1765]: time="2025-05-13T23:45:33.747184990Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c143713b4258662987c30817001cb2e427b27c2d4f10860f32fba223060a3fca\" id:\"c143713b4258662987c30817001cb2e427b27c2d4f10860f32fba223060a3fca\" pid:6356 exited_at:{seconds:1747179933 nanos:746791749}" May 13 23:45:33.766088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c143713b4258662987c30817001cb2e427b27c2d4f10860f32fba223060a3fca-rootfs.mount: Deactivated successfully. May 13 23:45:34.056968 kubelet[3386]: E0513 23:45:34.055372 3386 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e0ad5a15-3ec0-4420-8855-56d5eb85ddf6" containerName="calico-typha" May 13 23:45:34.056968 kubelet[3386]: I0513 23:45:34.055423 3386 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ad5a15-3ec0-4420-8855-56d5eb85ddf6" containerName="calico-typha" May 13 23:45:34.065574 systemd[1]: Created slice kubepods-besteffort-podb2cd6d84_b78a_4330_8edd_3691d2fa26e3.slice - libcontainer container kubepods-besteffort-podb2cd6d84_b78a_4330_8edd_3691d2fa26e3.slice. 
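The reload failure above means containerd re-read /etc/cni/net.d/10-calico.conflist on a filesystem WRITE event, apparently before the install-cni container had finished writing it, so the parser saw truncated JSON; the config loads once the file is complete, as the later CNI activity in this log shows. A minimal sketch of the failure mode and the usual temp-file-plus-rename workaround (the writer function is hypothetical, not Calico's installer):

```python
# Why a watcher can report "unexpected end of JSON input" for a conflist that
# is still being written, and how an atomic write avoids it. Illustrative
# only; the conflist content is abbreviated.
import json, os, tempfile

conflist = '{"name": "k8s-pod-network", "cniVersion": "0.3.1", "plugins": [{"type": "calico"}]}'

try:
    json.loads(conflist[: len(conflist) // 2])  # a reader racing the writer sees this
except json.JSONDecodeError as err:
    print("parse failed:", err)  # Python's wording differs from Go's, same failure

def write_atomically(path: str, data: str) -> None:
    """Write to a temp file in the same directory, then rename over the target,
    so a watcher only ever sees the old file or the complete new one."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    with os.fdopen(fd, "w") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    os.rename(tmp, path)  # rename(2) is atomic within a filesystem
```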
May 13 23:45:34.188013 containerd[1765]: time="2025-05-13T23:45:34.187949992Z" level=info msg="CreateContainer within sandbox \"61df49f7ef82fbbaef0f23446316857c2c9fdc825d83198ce345f0da0e4341ca\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 23:45:34.222676 kubelet[3386]: I0513 23:45:34.222624 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg5l6\" (UniqueName: \"kubernetes.io/projected/b2cd6d84-b78a-4330-8edd-3691d2fa26e3-kube-api-access-rg5l6\") pod \"calico-typha-589f5bc5fb-4w9wx\" (UID: \"b2cd6d84-b78a-4330-8edd-3691d2fa26e3\") " pod="calico-system/calico-typha-589f5bc5fb-4w9wx" May 13 23:45:34.222676 kubelet[3386]: I0513 23:45:34.222681 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2cd6d84-b78a-4330-8edd-3691d2fa26e3-tigera-ca-bundle\") pod \"calico-typha-589f5bc5fb-4w9wx\" (UID: \"b2cd6d84-b78a-4330-8edd-3691d2fa26e3\") " pod="calico-system/calico-typha-589f5bc5fb-4w9wx" May 13 23:45:34.222843 kubelet[3386]: I0513 23:45:34.222706 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b2cd6d84-b78a-4330-8edd-3691d2fa26e3-typha-certs\") pod \"calico-typha-589f5bc5fb-4w9wx\" (UID: \"b2cd6d84-b78a-4330-8edd-3691d2fa26e3\") " pod="calico-system/calico-typha-589f5bc5fb-4w9wx" May 13 23:45:34.227984 containerd[1765]: time="2025-05-13T23:45:34.227930377Z" level=info msg="Container 2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:34.254341 containerd[1765]: time="2025-05-13T23:45:34.254198782Z" level=info msg="CreateContainer within sandbox \"61df49f7ef82fbbaef0f23446316857c2c9fdc825d83198ce345f0da0e4341ca\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28\"" May 13 23:45:34.256552 containerd[1765]: time="2025-05-13T23:45:34.255560344Z" level=info msg="StartContainer for \"2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28\"" May 13 23:45:34.257363 containerd[1765]: time="2025-05-13T23:45:34.257337547Z" level=info msg="connecting to shim 2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28" address="unix:///run/containerd/s/9bde12a9f17062c4678e373bfeca6a9bd5be3c5f9a0715a60a64e94d7cdde0d4" protocol=ttrpc version=3 May 13 23:45:34.280377 systemd[1]: Started cri-containerd-2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28.scope - libcontainer container 2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28. 
May 13 23:45:34.331367 containerd[1765]: time="2025-05-13T23:45:34.329029149Z" level=info msg="StartContainer for \"2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28\" returns successfully" May 13 23:45:34.369123 containerd[1765]: time="2025-05-13T23:45:34.369078617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-589f5bc5fb-4w9wx,Uid:b2cd6d84-b78a-4330-8edd-3691d2fa26e3,Namespace:calico-system,Attempt:0,}" May 13 23:45:34.448700 containerd[1765]: time="2025-05-13T23:45:34.447916711Z" level=info msg="connecting to shim 4c6ad195384b003dca266f5b270f4cd688280be57e1914213c28ad57cd86603d" address="unix:///run/containerd/s/012b28d3c02216e3e282f3558fd2f50669ba5eef6fb6047b3a0a4a01c2c981a4" namespace=k8s.io protocol=ttrpc version=3 May 13 23:45:34.466711 systemd[1]: Created slice kubepods-besteffort-poda5dfe5c4_ff79_43c8_9bff_e58135810c7c.slice - libcontainer container kubepods-besteffort-poda5dfe5c4_ff79_43c8_9bff_e58135810c7c.slice. May 13 23:45:34.499394 systemd[1]: Started cri-containerd-4c6ad195384b003dca266f5b270f4cd688280be57e1914213c28ad57cd86603d.scope - libcontainer container 4c6ad195384b003dca266f5b270f4cd688280be57e1914213c28ad57cd86603d. May 13 23:45:34.562419 containerd[1765]: time="2025-05-13T23:45:34.562193026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-589f5bc5fb-4w9wx,Uid:b2cd6d84-b78a-4330-8edd-3691d2fa26e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c6ad195384b003dca266f5b270f4cd688280be57e1914213c28ad57cd86603d\"" May 13 23:45:34.570242 containerd[1765]: time="2025-05-13T23:45:34.570036279Z" level=info msg="CreateContainer within sandbox \"4c6ad195384b003dca266f5b270f4cd688280be57e1914213c28ad57cd86603d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 23:45:34.597302 containerd[1765]: time="2025-05-13T23:45:34.597110685Z" level=info msg="Container e27328d87240e13eefa5195bb88560d352cd85818397301ca28dcfdce09fdd43: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:34.613111 containerd[1765]: time="2025-05-13T23:45:34.612965672Z" level=info msg="CreateContainer within sandbox \"4c6ad195384b003dca266f5b270f4cd688280be57e1914213c28ad57cd86603d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e27328d87240e13eefa5195bb88560d352cd85818397301ca28dcfdce09fdd43\"" May 13 23:45:34.613477 containerd[1765]: time="2025-05-13T23:45:34.613456753Z" level=info msg="StartContainer for \"e27328d87240e13eefa5195bb88560d352cd85818397301ca28dcfdce09fdd43\"" May 13 23:45:34.614837 containerd[1765]: time="2025-05-13T23:45:34.614648635Z" level=info msg="connecting to shim e27328d87240e13eefa5195bb88560d352cd85818397301ca28dcfdce09fdd43" address="unix:///run/containerd/s/012b28d3c02216e3e282f3558fd2f50669ba5eef6fb6047b3a0a4a01c2c981a4" protocol=ttrpc version=3 May 13 23:45:34.626047 kubelet[3386]: I0513 23:45:34.626008 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq7p4\" (UniqueName: \"kubernetes.io/projected/a5dfe5c4-ff79-43c8-9bff-e58135810c7c-kube-api-access-dq7p4\") pod \"calico-kube-controllers-64657c8897-vmbr4\" (UID: \"a5dfe5c4-ff79-43c8-9bff-e58135810c7c\") " pod="calico-system/calico-kube-controllers-64657c8897-vmbr4" May 13 23:45:34.626147 kubelet[3386]: I0513 23:45:34.626063 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a5dfe5c4-ff79-43c8-9bff-e58135810c7c-tigera-ca-bundle\") pod \"calico-kube-controllers-64657c8897-vmbr4\" (UID: \"a5dfe5c4-ff79-43c8-9bff-e58135810c7c\") " pod="calico-system/calico-kube-controllers-64657c8897-vmbr4" May 13 23:45:34.640454 systemd[1]: Started cri-containerd-e27328d87240e13eefa5195bb88560d352cd85818397301ca28dcfdce09fdd43.scope - libcontainer container e27328d87240e13eefa5195bb88560d352cd85818397301ca28dcfdce09fdd43. May 13 23:45:34.695179 containerd[1765]: time="2025-05-13T23:45:34.695137852Z" level=info msg="StartContainer for \"e27328d87240e13eefa5195bb88560d352cd85818397301ca28dcfdce09fdd43\" returns successfully" May 13 23:45:34.772139 containerd[1765]: time="2025-05-13T23:45:34.772097863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64657c8897-vmbr4,Uid:a5dfe5c4-ff79-43c8-9bff-e58135810c7c,Namespace:calico-system,Attempt:0,}" May 13 23:45:34.825895 kubelet[3386]: I0513 23:45:34.825840 3386 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ad5a15-3ec0-4420-8855-56d5eb85ddf6" path="/var/lib/kubelet/pods/e0ad5a15-3ec0-4420-8855-56d5eb85ddf6/volumes" May 13 23:45:34.954295 systemd-networkd[1332]: cali1c09c492344: Link UP May 13 23:45:34.956477 systemd-networkd[1332]: cali1c09c492344: Gained carrier May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.878 [INFO][6524] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--64657c8897--vmbr4-eth0 calico-kube-controllers-64657c8897- calico-system a5dfe5c4-ff79-43c8-9bff-e58135810c7c 1125 0 2025-05-13 23:45:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64657c8897 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284.0.0-n-13ce75130c calico-kube-controllers-64657c8897-vmbr4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1c09c492344 [] []}} ContainerID="aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" Namespace="calico-system" Pod="calico-kube-controllers-64657c8897-vmbr4" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--64657c8897--vmbr4-" May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.878 [INFO][6524] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" Namespace="calico-system" Pod="calico-kube-controllers-64657c8897-vmbr4" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--64657c8897--vmbr4-eth0" May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.906 [INFO][6537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" HandleID="k8s-pod-network.aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--64657c8897--vmbr4-eth0" May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.918 [INFO][6537] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" HandleID="k8s-pod-network.aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" 
Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--64657c8897--vmbr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-13ce75130c", "pod":"calico-kube-controllers-64657c8897-vmbr4", "timestamp":"2025-05-13 23:45:34.906715292 +0000 UTC"}, Hostname:"ci-4284.0.0-n-13ce75130c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.918 [INFO][6537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.918 [INFO][6537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.918 [INFO][6537] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-13ce75130c' May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.920 [INFO][6537] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.923 [INFO][6537] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-13ce75130c" May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.927 [INFO][6537] ipam/ipam.go 489: Trying affinity for 192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.928 [INFO][6537] ipam/ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.931 [INFO][6537] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4284.0.0-n-13ce75130c" May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.931 [INFO][6537] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.933 [INFO][6537] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4 May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.938 [INFO][6537] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.948 [INFO][6537] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.119.201/26] block=192.168.119.192/26 handle="k8s-pod-network.aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.948 [INFO][6537] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.201/26] handle="k8s-pod-network.aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" host="ci-4284.0.0-n-13ce75130c" May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.948 [INFO][6537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:45:34.970183 containerd[1765]: 2025-05-13 23:45:34.948 [INFO][6537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.201/26] IPv6=[] ContainerID="aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" HandleID="k8s-pod-network.aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--64657c8897--vmbr4-eth0" May 13 23:45:34.970721 containerd[1765]: 2025-05-13 23:45:34.950 [INFO][6524] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" Namespace="calico-system" Pod="calico-kube-controllers-64657c8897-vmbr4" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--64657c8897--vmbr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--64657c8897--vmbr4-eth0", GenerateName:"calico-kube-controllers-64657c8897-", Namespace:"calico-system", SelfLink:"", UID:"a5dfe5c4-ff79-43c8-9bff-e58135810c7c", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 45, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64657c8897", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"", Pod:"calico-kube-controllers-64657c8897-vmbr4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c09c492344", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:34.970721 containerd[1765]: 2025-05-13 23:45:34.950 [INFO][6524] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.119.201/32] ContainerID="aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" Namespace="calico-system" Pod="calico-kube-controllers-64657c8897-vmbr4" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--64657c8897--vmbr4-eth0" May 13 23:45:34.970721 containerd[1765]: 2025-05-13 23:45:34.950 [INFO][6524] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c09c492344 ContainerID="aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" Namespace="calico-system" Pod="calico-kube-controllers-64657c8897-vmbr4" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--64657c8897--vmbr4-eth0" May 13 23:45:34.970721 containerd[1765]: 2025-05-13 23:45:34.954 [INFO][6524] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" Namespace="calico-system" Pod="calico-kube-controllers-64657c8897-vmbr4" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--64657c8897--vmbr4-eth0" May 13 23:45:34.970721 
containerd[1765]: 2025-05-13 23:45:34.954 [INFO][6524] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" Namespace="calico-system" Pod="calico-kube-controllers-64657c8897-vmbr4" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--64657c8897--vmbr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--64657c8897--vmbr4-eth0", GenerateName:"calico-kube-controllers-64657c8897-", Namespace:"calico-system", SelfLink:"", UID:"a5dfe5c4-ff79-43c8-9bff-e58135810c7c", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 45, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64657c8897", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-13ce75130c", ContainerID:"aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4", Pod:"calico-kube-controllers-64657c8897-vmbr4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c09c492344", MAC:"e6:38:b4:fd:a6:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:45:34.970721 containerd[1765]: 2025-05-13 23:45:34.967 [INFO][6524] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" Namespace="calico-system" Pod="calico-kube-controllers-64657c8897-vmbr4" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--64657c8897--vmbr4-eth0" May 13 23:45:35.040953 containerd[1765]: time="2025-05-13T23:45:35.040705360Z" level=info msg="connecting to shim aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4" address="unix:///run/containerd/s/75ae3e716de0a5431d4b26e12e46ca4865e7f66514146912d907b3c742af0892" namespace=k8s.io protocol=ttrpc version=3 May 13 23:45:35.062409 systemd[1]: Started cri-containerd-aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4.scope - libcontainer container aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4. 
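The IPAM trace above is Calico's block-affinity allocation: under the host-wide IPAM lock, the node confirms its affinity for the /26 block 192.168.119.192/26, assigns one address from it (192.168.119.201), and records a handle for the assignment in the datastore. A small sketch of the block arithmetic only; the set of already-assigned addresses below is an assumption made for illustration:

```python
# Block arithmetic behind the IPAM trace above. The handle/affinity
# bookkeeping in the datastore is not modeled, and which addresses were
# already taken from the block is assumed.
import ipaddress

block = ipaddress.ip_network("192.168.119.192/26")
print(block.num_addresses)                                # 64 addresses per /26
print(ipaddress.ip_address("192.168.119.201") in block)   # True

assigned = {ipaddress.ip_address(f"192.168.119.{n}") for n in range(192, 201)}

def assign_one(block, assigned):
    """Pick the next free address in the block ('Auto-assign 1 ipv4' in the log)."""
    for ip in block:
        if ip not in assigned:
            assigned.add(ip)
            return ip
    raise RuntimeError("block exhausted; real IPAM would move to another block")

print(assign_one(block, assigned))  # 192.168.119.201, the address claimed above
```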
May 13 23:45:35.101581 containerd[1765]: time="2025-05-13T23:45:35.101247143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64657c8897-vmbr4,Uid:a5dfe5c4-ff79-43c8-9bff-e58135810c7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4\"" May 13 23:45:35.112616 containerd[1765]: time="2025-05-13T23:45:35.112572682Z" level=info msg="CreateContainer within sandbox \"aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 23:45:35.144941 containerd[1765]: time="2025-05-13T23:45:35.144891057Z" level=info msg="Container 4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:35.164409 containerd[1765]: time="2025-05-13T23:45:35.164366770Z" level=info msg="CreateContainer within sandbox \"aad7b1f6b5510275b6313f5c266a3f4e268093da34ef827b409b22fdf2f534d4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449\"" May 13 23:45:35.165701 containerd[1765]: time="2025-05-13T23:45:35.165591212Z" level=info msg="StartContainer for \"4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449\"" May 13 23:45:35.167509 containerd[1765]: time="2025-05-13T23:45:35.167427816Z" level=info msg="connecting to shim 4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449" address="unix:///run/containerd/s/75ae3e716de0a5431d4b26e12e46ca4865e7f66514146912d907b3c742af0892" protocol=ttrpc version=3 May 13 23:45:35.203393 systemd[1]: Started cri-containerd-4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449.scope - libcontainer container 4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449. 
May 13 23:45:35.216083 kubelet[3386]: I0513 23:45:35.215143 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g2zvk" podStartSLOduration=4.215126377 podStartE2EDuration="4.215126377s" podCreationTimestamp="2025-05-13 23:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:45:35.214855336 +0000 UTC m=+82.511791945" watchObservedRunningTime="2025-05-13 23:45:35.215126377 +0000 UTC m=+82.512063066" May 13 23:45:35.235620 kubelet[3386]: I0513 23:45:35.235449 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-589f5bc5fb-4w9wx" podStartSLOduration=6.235431771 podStartE2EDuration="6.235431771s" podCreationTimestamp="2025-05-13 23:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:45:35.233595808 +0000 UTC m=+82.530532417" watchObservedRunningTime="2025-05-13 23:45:35.235431771 +0000 UTC m=+82.532368380" May 13 23:45:35.297511 containerd[1765]: time="2025-05-13T23:45:35.297445637Z" level=info msg="StartContainer for \"4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449\" returns successfully" May 13 23:45:35.320450 containerd[1765]: time="2025-05-13T23:45:35.320157075Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28\" id:\"3f8ccb2e1fa75584dc6a1f8108697dc477306126935004f345aebe88838a2d72\" pid:6629 exit_status:1 exited_at:{seconds:1747179935 nanos:319640115}" May 13 23:45:36.251119 kubelet[3386]: I0513 23:45:36.249927 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64657c8897-vmbr4" podStartSLOduration=4.249910178 podStartE2EDuration="4.249910178s" podCreationTimestamp="2025-05-13 23:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:45:36.249802937 +0000 UTC m=+83.546739546" watchObservedRunningTime="2025-05-13 23:45:36.249910178 +0000 UTC m=+83.546846787" May 13 23:45:36.282112 containerd[1765]: time="2025-05-13T23:45:36.281689472Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449\" id:\"212b433b3c5aab45fb45c55455b6f4b356b4989abac1b310390942aa85bcc124\" pid:6825 exit_status:1 exited_at:{seconds:1747179936 nanos:281372831}" May 13 23:45:36.314259 containerd[1765]: time="2025-05-13T23:45:36.313997447Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28\" id:\"d3d80e92dae1f18985ba911358d37d366b787603cd63b2e7e98f951ebba0c216\" pid:6838 exit_status:1 exited_at:{seconds:1747179936 nanos:313553086}" May 13 23:45:36.632399 systemd-networkd[1332]: cali1c09c492344: Gained IPv6LL May 13 23:45:37.256453 containerd[1765]: time="2025-05-13T23:45:37.256411810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449\" id:\"14ded5f5f2aa7cc939bbbdaf941e57971d3aff33dce8b6478f293f5fac7ed8e1\" pid:6900 exit_status:1 exited_at:{seconds:1747179937 nanos:256179490}" May 13 23:45:59.295923 systemd[1]: Started sshd@7-10.200.20.40:22-10.200.16.10:52318.service - OpenSSH per-connection server daemon 
(10.200.16.10:52318). May 13 23:45:59.765633 sshd[6943]: Accepted publickey for core from 10.200.16.10 port 52318 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:45:59.766079 sshd-session[6943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:45:59.772991 systemd-logind[1726]: New session 10 of user core. May 13 23:45:59.776386 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 23:46:00.174705 sshd[6945]: Connection closed by 10.200.16.10 port 52318 May 13 23:46:00.175288 sshd-session[6943]: pam_unix(sshd:session): session closed for user core May 13 23:46:00.178047 systemd-logind[1726]: Session 10 logged out. Waiting for processes to exit. May 13 23:46:00.179483 systemd[1]: sshd@7-10.200.20.40:22-10.200.16.10:52318.service: Deactivated successfully. May 13 23:46:00.181374 systemd[1]: session-10.scope: Deactivated successfully. May 13 23:46:00.182375 systemd-logind[1726]: Removed session 10. May 13 23:46:01.899710 containerd[1765]: time="2025-05-13T23:46:01.899646189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28\" id:\"6065b40c6117e9fb2f0cace6eed249eca1d3090b3645ea0e8fa80d8546e1de2c\" pid:6970 exited_at:{seconds:1747179961 nanos:899320028}" May 13 23:46:04.800377 containerd[1765]: time="2025-05-13T23:46:04.800271681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449\" id:\"1ec61ddf5b49c2e6cf13f6aefe616677f0b43b83706bfdbc7d9f7eae76288274\" pid:6994 exited_at:{seconds:1747179964 nanos:799514040}" May 13 23:46:05.272442 systemd[1]: Started sshd@8-10.200.20.40:22-10.200.16.10:52326.service - OpenSSH per-connection server daemon (10.200.16.10:52326). May 13 23:46:05.767693 sshd[7004]: Accepted publickey for core from 10.200.16.10 port 52326 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:05.769014 sshd-session[7004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:05.773444 systemd-logind[1726]: New session 11 of user core. May 13 23:46:05.782354 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 23:46:06.188710 sshd[7006]: Connection closed by 10.200.16.10 port 52326 May 13 23:46:06.189281 sshd-session[7004]: pam_unix(sshd:session): session closed for user core May 13 23:46:06.192844 systemd[1]: sshd@8-10.200.20.40:22-10.200.16.10:52326.service: Deactivated successfully. May 13 23:46:06.196142 systemd[1]: session-11.scope: Deactivated successfully. May 13 23:46:06.197004 systemd-logind[1726]: Session 11 logged out. Waiting for processes to exit. May 13 23:46:06.198473 systemd-logind[1726]: Removed session 11. May 13 23:46:11.287447 systemd[1]: Started sshd@9-10.200.20.40:22-10.200.16.10:33704.service - OpenSSH per-connection server daemon (10.200.16.10:33704). May 13 23:46:11.784453 sshd[7022]: Accepted publickey for core from 10.200.16.10 port 33704 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:11.785791 sshd-session[7022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:11.790395 systemd-logind[1726]: New session 12 of user core. May 13 23:46:11.800368 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 13 23:46:12.210296 sshd[7024]: Connection closed by 10.200.16.10 port 33704 May 13 23:46:12.210850 sshd-session[7022]: pam_unix(sshd:session): session closed for user core May 13 23:46:12.213777 systemd[1]: sshd@9-10.200.20.40:22-10.200.16.10:33704.service: Deactivated successfully. May 13 23:46:12.216020 systemd[1]: session-12.scope: Deactivated successfully. May 13 23:46:12.217609 systemd-logind[1726]: Session 12 logged out. Waiting for processes to exit. May 13 23:46:12.218887 systemd-logind[1726]: Removed session 12. May 13 23:46:12.840561 kubelet[3386]: I0513 23:46:12.840529 3386 scope.go:117] "RemoveContainer" containerID="426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78" May 13 23:46:12.843538 containerd[1765]: time="2025-05-13T23:46:12.842818231Z" level=info msg="RemoveContainer for \"426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78\"" May 13 23:46:12.881748 containerd[1765]: time="2025-05-13T23:46:12.881638898Z" level=info msg="RemoveContainer for \"426560840e1683e7d240cb9c61c8f93b9e97bc050608def159c49436cf1b6f78\" returns successfully" May 13 23:46:12.883362 containerd[1765]: time="2025-05-13T23:46:12.882969061Z" level=info msg="StopPodSandbox for \"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\"" May 13 23:46:12.883362 containerd[1765]: time="2025-05-13T23:46:12.883091941Z" level=info msg="TearDown network for sandbox \"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\" successfully" May 13 23:46:12.883362 containerd[1765]: time="2025-05-13T23:46:12.883105101Z" level=info msg="StopPodSandbox for \"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\" returns successfully" May 13 23:46:12.883736 containerd[1765]: time="2025-05-13T23:46:12.883703022Z" level=info msg="RemovePodSandbox for \"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\"" May 13 23:46:12.884058 containerd[1765]: time="2025-05-13T23:46:12.883909302Z" level=info msg="Forcibly stopping sandbox \"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\"" May 13 23:46:12.884058 containerd[1765]: time="2025-05-13T23:46:12.884010622Z" level=info msg="TearDown network for sandbox \"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\" successfully" May 13 23:46:12.885548 containerd[1765]: time="2025-05-13T23:46:12.885481105Z" level=info msg="Ensure that sandbox 97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182 in task-service has been cleanup successfully" May 13 23:46:13.020715 containerd[1765]: time="2025-05-13T23:46:13.020668981Z" level=info msg="RemovePodSandbox \"97feee1093cc65044cbbb7f681e5d8efbee43dccbed7c7ab89fd871fc71f6182\" returns successfully" May 13 23:46:13.021286 containerd[1765]: time="2025-05-13T23:46:13.021262582Z" level=info msg="StopPodSandbox for \"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\"" May 13 23:46:13.088155 containerd[1765]: 2025-05-13 23:46:13.056 [WARNING][7050] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:46:13.088155 containerd[1765]: 2025-05-13 23:46:13.056 [INFO][7050] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" May 13 23:46:13.088155 containerd[1765]: 2025-05-13 23:46:13.056 [INFO][7050] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" iface="eth0" netns="" May 13 23:46:13.088155 containerd[1765]: 2025-05-13 23:46:13.056 [INFO][7050] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" May 13 23:46:13.088155 containerd[1765]: 2025-05-13 23:46:13.056 [INFO][7050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" May 13 23:46:13.088155 containerd[1765]: 2025-05-13 23:46:13.073 [INFO][7057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" HandleID="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:46:13.088155 containerd[1765]: 2025-05-13 23:46:13.073 [INFO][7057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:46:13.088155 containerd[1765]: 2025-05-13 23:46:13.073 [INFO][7057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:46:13.088155 containerd[1765]: 2025-05-13 23:46:13.083 [WARNING][7057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" HandleID="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:46:13.088155 containerd[1765]: 2025-05-13 23:46:13.083 [INFO][7057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" HandleID="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:46:13.088155 containerd[1765]: 2025-05-13 23:46:13.085 [INFO][7057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:46:13.088155 containerd[1765]: 2025-05-13 23:46:13.086 [INFO][7050] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" May 13 23:46:13.089244 containerd[1765]: time="2025-05-13T23:46:13.088598899Z" level=info msg="TearDown network for sandbox \"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\" successfully" May 13 23:46:13.089244 containerd[1765]: time="2025-05-13T23:46:13.088634259Z" level=info msg="StopPodSandbox for \"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\" returns successfully" May 13 23:46:13.089244 containerd[1765]: time="2025-05-13T23:46:13.089084580Z" level=info msg="RemovePodSandbox for \"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\"" May 13 23:46:13.089244 containerd[1765]: time="2025-05-13T23:46:13.089111460Z" level=info msg="Forcibly stopping sandbox \"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\"" May 13 23:46:13.150237 containerd[1765]: 2025-05-13 23:46:13.120 [WARNING][7076] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:46:13.150237 containerd[1765]: 2025-05-13 23:46:13.120 [INFO][7076] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" May 13 23:46:13.150237 containerd[1765]: 2025-05-13 23:46:13.120 [INFO][7076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" iface="eth0" netns="" May 13 23:46:13.150237 containerd[1765]: 2025-05-13 23:46:13.120 [INFO][7076] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" May 13 23:46:13.150237 containerd[1765]: 2025-05-13 23:46:13.120 [INFO][7076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" May 13 23:46:13.150237 containerd[1765]: 2025-05-13 23:46:13.137 [INFO][7083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" HandleID="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:46:13.150237 containerd[1765]: 2025-05-13 23:46:13.137 [INFO][7083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:46:13.150237 containerd[1765]: 2025-05-13 23:46:13.137 [INFO][7083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:46:13.150237 containerd[1765]: 2025-05-13 23:46:13.145 [WARNING][7083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" HandleID="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:46:13.150237 containerd[1765]: 2025-05-13 23:46:13.145 [INFO][7083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" HandleID="k8s-pod-network.9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--8p7bb-eth0" May 13 23:46:13.150237 containerd[1765]: 2025-05-13 23:46:13.147 [INFO][7083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:46:13.150237 containerd[1765]: 2025-05-13 23:46:13.148 [INFO][7076] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13" May 13 23:46:13.150237 containerd[1765]: time="2025-05-13T23:46:13.149946966Z" level=info msg="TearDown network for sandbox \"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\" successfully" May 13 23:46:13.152247 containerd[1765]: time="2025-05-13T23:46:13.152037209Z" level=info msg="Ensure that sandbox 9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13 in task-service has been cleanup successfully" May 13 23:46:13.226982 containerd[1765]: time="2025-05-13T23:46:13.226839460Z" level=info msg="RemovePodSandbox \"9f29430652d5b769a018dee2442047168cddaceabeb62407f213ca841f894e13\" returns successfully" May 13 23:46:13.227599 containerd[1765]: time="2025-05-13T23:46:13.227423141Z" level=info msg="StopPodSandbox for \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\"" May 13 23:46:13.227599 containerd[1765]: time="2025-05-13T23:46:13.227541781Z" level=info msg="TearDown network for sandbox \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" successfully" May 13 23:46:13.227599 containerd[1765]: time="2025-05-13T23:46:13.227552541Z" level=info msg="StopPodSandbox for \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" returns successfully" May 13 23:46:13.228165 containerd[1765]: time="2025-05-13T23:46:13.227977022Z" level=info msg="RemovePodSandbox for \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\"" May 13 23:46:13.228165 containerd[1765]: time="2025-05-13T23:46:13.228016942Z" level=info msg="Forcibly stopping sandbox \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\"" May 13 23:46:13.228165 containerd[1765]: time="2025-05-13T23:46:13.228086502Z" level=info msg="TearDown network for sandbox \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" successfully" May 13 23:46:13.229850 containerd[1765]: time="2025-05-13T23:46:13.229671145Z" level=info msg="Ensure that sandbox 060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc in task-service has been cleanup successfully" May 13 23:46:13.380820 containerd[1765]: time="2025-05-13T23:46:13.380682368Z" level=info msg="RemovePodSandbox \"060b54a169e181e66ca5e8b15de1254fb536d8fee8304b163ab260faae3ff5dc\" returns successfully" May 13 23:46:13.381621 containerd[1765]: time="2025-05-13T23:46:13.381271929Z" level=info msg="StopPodSandbox for \"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\"" May 13 23:46:13.444996 containerd[1765]: 2025-05-13 23:46:13.413 [WARNING][7101] cni-plugin/k8s.go 566: WorkloadEndpoint 
does not exist in the datastore, moving forward with the clean up ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:46:13.444996 containerd[1765]: 2025-05-13 23:46:13.413 [INFO][7101] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" May 13 23:46:13.444996 containerd[1765]: 2025-05-13 23:46:13.413 [INFO][7101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" iface="eth0" netns="" May 13 23:46:13.444996 containerd[1765]: 2025-05-13 23:46:13.413 [INFO][7101] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" May 13 23:46:13.444996 containerd[1765]: 2025-05-13 23:46:13.413 [INFO][7101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" May 13 23:46:13.444996 containerd[1765]: 2025-05-13 23:46:13.431 [INFO][7108] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" HandleID="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:46:13.444996 containerd[1765]: 2025-05-13 23:46:13.431 [INFO][7108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:46:13.444996 containerd[1765]: 2025-05-13 23:46:13.431 [INFO][7108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:46:13.444996 containerd[1765]: 2025-05-13 23:46:13.440 [WARNING][7108] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" HandleID="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:46:13.444996 containerd[1765]: 2025-05-13 23:46:13.440 [INFO][7108] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" HandleID="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:46:13.444996 containerd[1765]: 2025-05-13 23:46:13.441 [INFO][7108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:46:13.444996 containerd[1765]: 2025-05-13 23:46:13.443 [INFO][7101] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" May 13 23:46:13.444996 containerd[1765]: time="2025-05-13T23:46:13.444871599Z" level=info msg="TearDown network for sandbox \"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\" successfully" May 13 23:46:13.444996 containerd[1765]: time="2025-05-13T23:46:13.444894959Z" level=info msg="StopPodSandbox for \"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\" returns successfully" May 13 23:46:13.446391 containerd[1765]: time="2025-05-13T23:46:13.446348882Z" level=info msg="RemovePodSandbox for \"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\"" May 13 23:46:13.446391 containerd[1765]: time="2025-05-13T23:46:13.446388002Z" level=info msg="Forcibly stopping sandbox \"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\"" May 13 23:46:13.512308 containerd[1765]: 2025-05-13 23:46:13.478 [WARNING][7126] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:46:13.512308 containerd[1765]: 2025-05-13 23:46:13.478 [INFO][7126] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" May 13 23:46:13.512308 containerd[1765]: 2025-05-13 23:46:13.478 [INFO][7126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" iface="eth0" netns="" May 13 23:46:13.512308 containerd[1765]: 2025-05-13 23:46:13.478 [INFO][7126] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" May 13 23:46:13.512308 containerd[1765]: 2025-05-13 23:46:13.478 [INFO][7126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" May 13 23:46:13.512308 containerd[1765]: 2025-05-13 23:46:13.497 [INFO][7133] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" HandleID="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:46:13.512308 containerd[1765]: 2025-05-13 23:46:13.497 [INFO][7133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:46:13.512308 containerd[1765]: 2025-05-13 23:46:13.497 [INFO][7133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:46:13.512308 containerd[1765]: 2025-05-13 23:46:13.508 [WARNING][7133] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" HandleID="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:46:13.512308 containerd[1765]: 2025-05-13 23:46:13.508 [INFO][7133] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" HandleID="k8s-pod-network.335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--kube--controllers--7dcd59c5cd--jhn7j-eth0" May 13 23:46:13.512308 containerd[1765]: 2025-05-13 23:46:13.509 [INFO][7133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:46:13.512308 containerd[1765]: 2025-05-13 23:46:13.510 [INFO][7126] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f" May 13 23:46:13.512765 containerd[1765]: time="2025-05-13T23:46:13.512352637Z" level=info msg="TearDown network for sandbox \"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\" successfully" May 13 23:46:13.513831 containerd[1765]: time="2025-05-13T23:46:13.513799040Z" level=info msg="Ensure that sandbox 335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f in task-service has been cleanup successfully" May 13 23:46:13.570786 containerd[1765]: time="2025-05-13T23:46:13.570738219Z" level=info msg="RemovePodSandbox \"335ac64180dc9115b0fb2bde1ffe7e4a55db9bd8c19b8e19d5eec22532597b5f\" returns successfully" May 13 23:46:13.571537 containerd[1765]: time="2025-05-13T23:46:13.571193779Z" level=info msg="StopPodSandbox for \"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\"" May 13 23:46:13.638888 containerd[1765]: 2025-05-13 23:46:13.607 [WARNING][7151] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:46:13.638888 containerd[1765]: 2025-05-13 23:46:13.607 [INFO][7151] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" May 13 23:46:13.638888 containerd[1765]: 2025-05-13 23:46:13.607 [INFO][7151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" iface="eth0" netns="" May 13 23:46:13.638888 containerd[1765]: 2025-05-13 23:46:13.607 [INFO][7151] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" May 13 23:46:13.638888 containerd[1765]: 2025-05-13 23:46:13.607 [INFO][7151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" May 13 23:46:13.638888 containerd[1765]: 2025-05-13 23:46:13.626 [INFO][7159] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" HandleID="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:46:13.638888 containerd[1765]: 2025-05-13 23:46:13.626 [INFO][7159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:46:13.638888 containerd[1765]: 2025-05-13 23:46:13.626 [INFO][7159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:46:13.638888 containerd[1765]: 2025-05-13 23:46:13.634 [WARNING][7159] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" HandleID="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:46:13.638888 containerd[1765]: 2025-05-13 23:46:13.634 [INFO][7159] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" HandleID="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:46:13.638888 containerd[1765]: 2025-05-13 23:46:13.636 [INFO][7159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:46:13.638888 containerd[1765]: 2025-05-13 23:46:13.637 [INFO][7151] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" May 13 23:46:13.639435 containerd[1765]: time="2025-05-13T23:46:13.638931497Z" level=info msg="TearDown network for sandbox \"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\" successfully" May 13 23:46:13.639435 containerd[1765]: time="2025-05-13T23:46:13.638955738Z" level=info msg="StopPodSandbox for \"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\" returns successfully" May 13 23:46:13.639435 containerd[1765]: time="2025-05-13T23:46:13.639413378Z" level=info msg="RemovePodSandbox for \"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\"" May 13 23:46:13.639530 containerd[1765]: time="2025-05-13T23:46:13.639441618Z" level=info msg="Forcibly stopping sandbox \"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\"" May 13 23:46:13.703720 containerd[1765]: 2025-05-13 23:46:13.672 [WARNING][7177] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" WorkloadEndpoint="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:46:13.703720 containerd[1765]: 2025-05-13 23:46:13.673 [INFO][7177] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" May 13 23:46:13.703720 containerd[1765]: 2025-05-13 23:46:13.673 [INFO][7177] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" iface="eth0" netns="" May 13 23:46:13.703720 containerd[1765]: 2025-05-13 23:46:13.673 [INFO][7177] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" May 13 23:46:13.703720 containerd[1765]: 2025-05-13 23:46:13.673 [INFO][7177] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" May 13 23:46:13.703720 containerd[1765]: 2025-05-13 23:46:13.689 [INFO][7184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" HandleID="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:46:13.703720 containerd[1765]: 2025-05-13 23:46:13.690 [INFO][7184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:46:13.703720 containerd[1765]: 2025-05-13 23:46:13.690 [INFO][7184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:46:13.703720 containerd[1765]: 2025-05-13 23:46:13.699 [WARNING][7184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" HandleID="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:46:13.703720 containerd[1765]: 2025-05-13 23:46:13.699 [INFO][7184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" HandleID="k8s-pod-network.3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" Workload="ci--4284.0.0--n--13ce75130c-k8s-calico--apiserver--6c659f749--ltpvp-eth0" May 13 23:46:13.703720 containerd[1765]: 2025-05-13 23:46:13.700 [INFO][7184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:46:13.703720 containerd[1765]: 2025-05-13 23:46:13.702 [INFO][7177] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117" May 13 23:46:13.703720 containerd[1765]: time="2025-05-13T23:46:13.703615810Z" level=info msg="TearDown network for sandbox \"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\" successfully" May 13 23:46:13.708256 containerd[1765]: time="2025-05-13T23:46:13.707471977Z" level=info msg="Ensure that sandbox 3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117 in task-service has been cleanup successfully" May 13 23:46:13.785773 containerd[1765]: time="2025-05-13T23:46:13.785627393Z" level=info msg="RemovePodSandbox \"3655b60746356b748c7952dc3de82e9f998587cd29a03340b53bc8882f79e117\" returns successfully" May 13 23:46:17.299229 systemd[1]: Started sshd@10-10.200.20.40:22-10.200.16.10:33708.service - OpenSSH per-connection server daemon (10.200.16.10:33708). May 13 23:46:17.785164 sshd[7199]: Accepted publickey for core from 10.200.16.10 port 33708 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:17.786769 sshd-session[7199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:17.790896 systemd-logind[1726]: New session 13 of user core. May 13 23:46:17.795345 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 23:46:18.210532 sshd[7201]: Connection closed by 10.200.16.10 port 33708 May 13 23:46:18.210899 sshd-session[7199]: pam_unix(sshd:session): session closed for user core May 13 23:46:18.214620 systemd[1]: sshd@10-10.200.20.40:22-10.200.16.10:33708.service: Deactivated successfully. May 13 23:46:18.216305 systemd[1]: session-13.scope: Deactivated successfully. May 13 23:46:18.217000 systemd-logind[1726]: Session 13 logged out. Waiting for processes to exit. May 13 23:46:18.218129 systemd-logind[1726]: Removed session 13. May 13 23:46:23.292855 systemd[1]: Started sshd@11-10.200.20.40:22-10.200.16.10:55446.service - OpenSSH per-connection server daemon (10.200.16.10:55446). May 13 23:46:23.748510 sshd[7217]: Accepted publickey for core from 10.200.16.10 port 55446 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:23.749848 sshd-session[7217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:23.753950 systemd-logind[1726]: New session 14 of user core. May 13 23:46:23.762340 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 13 23:46:24.151121 sshd[7219]: Connection closed by 10.200.16.10 port 55446 May 13 23:46:24.151681 sshd-session[7217]: pam_unix(sshd:session): session closed for user core May 13 23:46:24.155261 systemd[1]: sshd@11-10.200.20.40:22-10.200.16.10:55446.service: Deactivated successfully. May 13 23:46:24.157573 systemd[1]: session-14.scope: Deactivated successfully. May 13 23:46:24.158490 systemd-logind[1726]: Session 14 logged out. Waiting for processes to exit. May 13 23:46:24.159617 systemd-logind[1726]: Removed session 14. May 13 23:46:24.232362 systemd[1]: Started sshd@12-10.200.20.40:22-10.200.16.10:55448.service - OpenSSH per-connection server daemon (10.200.16.10:55448). May 13 23:46:24.692130 sshd[7232]: Accepted publickey for core from 10.200.16.10 port 55448 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:24.693471 sshd-session[7232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:24.697849 systemd-logind[1726]: New session 15 of user core. May 13 23:46:24.704356 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 23:46:25.137247 sshd[7234]: Connection closed by 10.200.16.10 port 55448 May 13 23:46:25.137755 sshd-session[7232]: pam_unix(sshd:session): session closed for user core May 13 23:46:25.141084 systemd-logind[1726]: Session 15 logged out. Waiting for processes to exit. May 13 23:46:25.141831 systemd[1]: sshd@12-10.200.20.40:22-10.200.16.10:55448.service: Deactivated successfully. May 13 23:46:25.144879 systemd[1]: session-15.scope: Deactivated successfully. May 13 23:46:25.146444 systemd-logind[1726]: Removed session 15. May 13 23:46:25.229912 systemd[1]: Started sshd@13-10.200.20.40:22-10.200.16.10:55462.service - OpenSSH per-connection server daemon (10.200.16.10:55462). May 13 23:46:25.688180 sshd[7243]: Accepted publickey for core from 10.200.16.10 port 55462 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:25.689907 sshd-session[7243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:25.694865 systemd-logind[1726]: New session 16 of user core. May 13 23:46:25.700379 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 23:46:26.077814 sshd[7245]: Connection closed by 10.200.16.10 port 55462 May 13 23:46:26.078572 sshd-session[7243]: pam_unix(sshd:session): session closed for user core May 13 23:46:26.081591 systemd-logind[1726]: Session 16 logged out. Waiting for processes to exit. May 13 23:46:26.081719 systemd[1]: sshd@13-10.200.20.40:22-10.200.16.10:55462.service: Deactivated successfully. May 13 23:46:26.083487 systemd[1]: session-16.scope: Deactivated successfully. May 13 23:46:26.086015 systemd-logind[1726]: Removed session 16. May 13 23:46:31.170347 systemd[1]: Started sshd@14-10.200.20.40:22-10.200.16.10:33608.service - OpenSSH per-connection server daemon (10.200.16.10:33608). May 13 23:46:31.666167 sshd[7256]: Accepted publickey for core from 10.200.16.10 port 33608 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:31.667469 sshd-session[7256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:31.671469 systemd-logind[1726]: New session 17 of user core. May 13 23:46:31.676353 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 13 23:46:31.900253 containerd[1765]: time="2025-05-13T23:46:31.900010524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28\" id:\"450dfbed37c12bb1f0be416649fefa7fc61e27b7f56728cdb906a22b8b92fe2f\" pid:7272 exited_at:{seconds:1747179991 nanos:899707323}" May 13 23:46:32.095512 sshd[7258]: Connection closed by 10.200.16.10 port 33608 May 13 23:46:32.099676 systemd[1]: sshd@14-10.200.20.40:22-10.200.16.10:33608.service: Deactivated successfully. May 13 23:46:32.096240 sshd-session[7256]: pam_unix(sshd:session): session closed for user core May 13 23:46:32.102525 systemd[1]: session-17.scope: Deactivated successfully. May 13 23:46:32.103891 systemd-logind[1726]: Session 17 logged out. Waiting for processes to exit. May 13 23:46:32.105097 systemd-logind[1726]: Removed session 17. May 13 23:46:34.809808 containerd[1765]: time="2025-05-13T23:46:34.809698499Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449\" id:\"9177a91c4626a46c94b0035762357ed4c9277e0c3a23fb12782ba11c1d440034\" pid:7318 exited_at:{seconds:1747179994 nanos:808165176}" May 13 23:46:34.812550 containerd[1765]: time="2025-05-13T23:46:34.812516263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449\" id:\"b213b976f71c288fc3630e46c6f8163b7cf88a5c4c9c8fc04c6db6fe16a58629\" pid:7319 exited_at:{seconds:1747179994 nanos:812071183}" May 13 23:46:37.183856 systemd[1]: Started sshd@15-10.200.20.40:22-10.200.16.10:33612.service - OpenSSH per-connection server daemon (10.200.16.10:33612). May 13 23:46:37.670101 sshd[7337]: Accepted publickey for core from 10.200.16.10 port 33612 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:37.671661 sshd-session[7337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:37.676757 systemd-logind[1726]: New session 18 of user core. May 13 23:46:37.682372 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 23:46:38.097882 sshd[7339]: Connection closed by 10.200.16.10 port 33612 May 13 23:46:38.098416 sshd-session[7337]: pam_unix(sshd:session): session closed for user core May 13 23:46:38.101472 systemd[1]: sshd@15-10.200.20.40:22-10.200.16.10:33612.service: Deactivated successfully. May 13 23:46:38.103834 systemd[1]: session-18.scope: Deactivated successfully. May 13 23:46:38.105932 systemd-logind[1726]: Session 18 logged out. Waiting for processes to exit. May 13 23:46:38.106926 systemd-logind[1726]: Removed session 18. May 13 23:46:43.181028 systemd[1]: Started sshd@16-10.200.20.40:22-10.200.16.10:38450.service - OpenSSH per-connection server daemon (10.200.16.10:38450). May 13 23:46:43.640844 sshd[7351]: Accepted publickey for core from 10.200.16.10 port 38450 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:43.642199 sshd-session[7351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:43.647372 systemd-logind[1726]: New session 19 of user core. May 13 23:46:43.651372 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 23:46:44.049502 sshd[7353]: Connection closed by 10.200.16.10 port 38450 May 13 23:46:44.049350 sshd-session[7351]: pam_unix(sshd:session): session closed for user core May 13 23:46:44.052500 systemd-logind[1726]: Session 19 logged out. Waiting for processes to exit. 
May 13 23:46:44.052760 systemd[1]: sshd@16-10.200.20.40:22-10.200.16.10:38450.service: Deactivated successfully. May 13 23:46:44.055846 systemd[1]: session-19.scope: Deactivated successfully. May 13 23:46:44.057948 systemd-logind[1726]: Removed session 19. May 13 23:46:49.132852 systemd[1]: Started sshd@17-10.200.20.40:22-10.200.16.10:60862.service - OpenSSH per-connection server daemon (10.200.16.10:60862). May 13 23:46:49.596245 sshd[7367]: Accepted publickey for core from 10.200.16.10 port 60862 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:49.597589 sshd-session[7367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:49.601624 systemd-logind[1726]: New session 20 of user core. May 13 23:46:49.614344 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 23:46:49.984058 sshd[7369]: Connection closed by 10.200.16.10 port 60862 May 13 23:46:49.984467 sshd-session[7367]: pam_unix(sshd:session): session closed for user core May 13 23:46:49.988584 systemd[1]: sshd@17-10.200.20.40:22-10.200.16.10:60862.service: Deactivated successfully. May 13 23:46:49.990727 systemd[1]: session-20.scope: Deactivated successfully. May 13 23:46:49.992816 systemd-logind[1726]: Session 20 logged out. Waiting for processes to exit. May 13 23:46:49.993695 systemd-logind[1726]: Removed session 20. May 13 23:46:55.067795 systemd[1]: Started sshd@18-10.200.20.40:22-10.200.16.10:60874.service - OpenSSH per-connection server daemon (10.200.16.10:60874). May 13 23:46:55.531289 sshd[7381]: Accepted publickey for core from 10.200.16.10 port 60874 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:55.533012 sshd-session[7381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:55.537790 systemd-logind[1726]: New session 21 of user core. May 13 23:46:55.543501 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 23:46:55.936378 sshd[7383]: Connection closed by 10.200.16.10 port 60874 May 13 23:46:55.937109 sshd-session[7381]: pam_unix(sshd:session): session closed for user core May 13 23:46:55.940656 systemd[1]: sshd@18-10.200.20.40:22-10.200.16.10:60874.service: Deactivated successfully. May 13 23:46:55.940935 systemd-logind[1726]: Session 21 logged out. Waiting for processes to exit. May 13 23:46:55.943233 systemd[1]: session-21.scope: Deactivated successfully. May 13 23:46:55.944770 systemd-logind[1726]: Removed session 21. May 13 23:47:01.025176 systemd[1]: Started sshd@19-10.200.20.40:22-10.200.16.10:51380.service - OpenSSH per-connection server daemon (10.200.16.10:51380). May 13 23:47:01.513602 sshd[7407]: Accepted publickey for core from 10.200.16.10 port 51380 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:01.514891 sshd-session[7407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:01.520334 systemd-logind[1726]: New session 22 of user core. May 13 23:47:01.524361 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 13 23:47:01.894245 containerd[1765]: time="2025-05-13T23:47:01.894169227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28\" id:\"63fdc9363ae2f253a0c6347ac6122655875ae0ba83de1854d6bb7635e9a65ca0\" pid:7428 exit_status:1 exited_at:{seconds:1747180021 nanos:893666506}" May 13 23:47:01.922059 sshd[7409]: Connection closed by 10.200.16.10 port 51380 May 13 23:47:01.921968 sshd-session[7407]: pam_unix(sshd:session): session closed for user core May 13 23:47:01.925001 systemd-logind[1726]: Session 22 logged out. Waiting for processes to exit. May 13 23:47:01.925180 systemd[1]: sshd@19-10.200.20.40:22-10.200.16.10:51380.service: Deactivated successfully. May 13 23:47:01.926834 systemd[1]: session-22.scope: Deactivated successfully. May 13 23:47:01.928997 systemd-logind[1726]: Removed session 22. May 13 23:47:04.803189 containerd[1765]: time="2025-05-13T23:47:04.803020060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449\" id:\"4d4860b2615e9288d79cb825163fb51f283570db623fc064d1db90ac12e704e9\" pid:7456 exited_at:{seconds:1747180024 nanos:802770980}" May 13 23:47:07.014453 systemd[1]: Started sshd@20-10.200.20.40:22-10.200.16.10:51394.service - OpenSSH per-connection server daemon (10.200.16.10:51394). May 13 23:47:07.503753 sshd[7466]: Accepted publickey for core from 10.200.16.10 port 51394 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:07.505031 sshd-session[7466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:07.509292 systemd-logind[1726]: New session 23 of user core. May 13 23:47:07.517347 systemd[1]: Started session-23.scope - Session 23 of User core. May 13 23:47:07.923768 sshd[7468]: Connection closed by 10.200.16.10 port 51394 May 13 23:47:07.924362 sshd-session[7466]: pam_unix(sshd:session): session closed for user core May 13 23:47:07.927807 systemd[1]: sshd@20-10.200.20.40:22-10.200.16.10:51394.service: Deactivated successfully. May 13 23:47:07.930168 systemd[1]: session-23.scope: Deactivated successfully. May 13 23:47:07.931559 systemd-logind[1726]: Session 23 logged out. Waiting for processes to exit. May 13 23:47:07.932367 systemd-logind[1726]: Removed session 23. May 13 23:47:13.021781 systemd[1]: Started sshd@21-10.200.20.40:22-10.200.16.10:44858.service - OpenSSH per-connection server daemon (10.200.16.10:44858). May 13 23:47:13.510938 sshd[7500]: Accepted publickey for core from 10.200.16.10 port 44858 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:13.512384 sshd-session[7500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:13.517508 systemd-logind[1726]: New session 24 of user core. May 13 23:47:13.525419 systemd[1]: Started session-24.scope - Session 24 of User core. May 13 23:47:13.929559 sshd[7505]: Connection closed by 10.200.16.10 port 44858 May 13 23:47:13.929088 sshd-session[7500]: pam_unix(sshd:session): session closed for user core May 13 23:47:13.932524 systemd[1]: sshd@21-10.200.20.40:22-10.200.16.10:44858.service: Deactivated successfully. May 13 23:47:13.934867 systemd[1]: session-24.scope: Deactivated successfully. May 13 23:47:13.936065 systemd-logind[1726]: Session 24 logged out. Waiting for processes to exit. May 13 23:47:13.938871 systemd-logind[1726]: Removed session 24. 
May 13 23:47:19.011252 systemd[1]: Started sshd@22-10.200.20.40:22-10.200.16.10:42994.service - OpenSSH per-connection server daemon (10.200.16.10:42994). May 13 23:47:19.476752 sshd[7518]: Accepted publickey for core from 10.200.16.10 port 42994 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:19.478098 sshd-session[7518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:19.482751 systemd-logind[1726]: New session 25 of user core. May 13 23:47:19.487403 systemd[1]: Started session-25.scope - Session 25 of User core. May 13 23:47:19.872336 sshd[7520]: Connection closed by 10.200.16.10 port 42994 May 13 23:47:19.872153 sshd-session[7518]: pam_unix(sshd:session): session closed for user core May 13 23:47:19.875844 systemd-logind[1726]: Session 25 logged out. Waiting for processes to exit. May 13 23:47:19.876464 systemd[1]: sshd@22-10.200.20.40:22-10.200.16.10:42994.service: Deactivated successfully. May 13 23:47:19.878792 systemd[1]: session-25.scope: Deactivated successfully. May 13 23:47:19.879840 systemd-logind[1726]: Removed session 25. May 13 23:47:24.962168 systemd[1]: Started sshd@23-10.200.20.40:22-10.200.16.10:43002.service - OpenSSH per-connection server daemon (10.200.16.10:43002). May 13 23:47:25.448799 sshd[7532]: Accepted publickey for core from 10.200.16.10 port 43002 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:25.450527 sshd-session[7532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:25.457735 systemd-logind[1726]: New session 26 of user core. May 13 23:47:25.462373 systemd[1]: Started session-26.scope - Session 26 of User core. May 13 23:47:25.866494 sshd[7534]: Connection closed by 10.200.16.10 port 43002 May 13 23:47:25.866663 sshd-session[7532]: pam_unix(sshd:session): session closed for user core May 13 23:47:25.871508 systemd-logind[1726]: Session 26 logged out. Waiting for processes to exit. May 13 23:47:25.871723 systemd[1]: sshd@23-10.200.20.40:22-10.200.16.10:43002.service: Deactivated successfully. May 13 23:47:25.873869 systemd[1]: session-26.scope: Deactivated successfully. May 13 23:47:25.875052 systemd-logind[1726]: Removed session 26. May 13 23:47:30.949147 systemd[1]: Started sshd@24-10.200.20.40:22-10.200.16.10:58312.service - OpenSSH per-connection server daemon (10.200.16.10:58312). May 13 23:47:31.413086 sshd[7546]: Accepted publickey for core from 10.200.16.10 port 58312 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:31.414704 sshd-session[7546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:31.419883 systemd-logind[1726]: New session 27 of user core. May 13 23:47:31.423368 systemd[1]: Started session-27.scope - Session 27 of User core. May 13 23:47:31.833593 sshd[7548]: Connection closed by 10.200.16.10 port 58312 May 13 23:47:31.834355 sshd-session[7546]: pam_unix(sshd:session): session closed for user core May 13 23:47:31.837828 systemd[1]: sshd@24-10.200.20.40:22-10.200.16.10:58312.service: Deactivated successfully. May 13 23:47:31.840782 systemd[1]: session-27.scope: Deactivated successfully. May 13 23:47:31.841857 systemd-logind[1726]: Session 27 logged out. Waiting for processes to exit. May 13 23:47:31.843158 systemd-logind[1726]: Removed session 27. 
May 13 23:47:31.898525 containerd[1765]: time="2025-05-13T23:47:31.898483265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28\" id:\"88e445bfc20104e4d3a08824a714372d1111f51480a8bf6fcbbc2b944e141129\" pid:7571 exited_at:{seconds:1747180051 nanos:897959344}" May 13 23:47:31.925020 systemd[1]: Started sshd@25-10.200.20.40:22-10.200.16.10:58320.service - OpenSSH per-connection server daemon (10.200.16.10:58320). May 13 23:47:32.422436 sshd[7585]: Accepted publickey for core from 10.200.16.10 port 58320 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:32.423831 sshd-session[7585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:32.428400 systemd-logind[1726]: New session 28 of user core. May 13 23:47:32.432361 systemd[1]: Started session-28.scope - Session 28 of User core. May 13 23:47:32.962989 sshd[7587]: Connection closed by 10.200.16.10 port 58320 May 13 23:47:32.963709 sshd-session[7585]: pam_unix(sshd:session): session closed for user core May 13 23:47:32.966549 systemd-logind[1726]: Session 28 logged out. Waiting for processes to exit. May 13 23:47:32.966709 systemd[1]: sshd@25-10.200.20.40:22-10.200.16.10:58320.service: Deactivated successfully. May 13 23:47:32.968532 systemd[1]: session-28.scope: Deactivated successfully. May 13 23:47:32.970399 systemd-logind[1726]: Removed session 28. May 13 23:47:33.047824 systemd[1]: Started sshd@26-10.200.20.40:22-10.200.16.10:58324.service - OpenSSH per-connection server daemon (10.200.16.10:58324). May 13 23:47:33.540082 sshd[7597]: Accepted publickey for core from 10.200.16.10 port 58324 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:33.541661 sshd-session[7597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:33.545641 systemd-logind[1726]: New session 29 of user core. May 13 23:47:33.553374 systemd[1]: Started session-29.scope - Session 29 of User core. May 13 23:47:34.928737 containerd[1765]: time="2025-05-13T23:47:34.928690646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449\" id:\"6c86aaa9868e80d9ceced06bca9a5184a2ed101406042702e2644173539e79a5\" pid:7631 exited_at:{seconds:1747180054 nanos:928464166}" May 13 23:47:34.935425 containerd[1765]: time="2025-05-13T23:47:34.935275496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449\" id:\"6e6536097dcfa563ac2cb0000cc64642bfd00d49707c340ac32babe1d87ed46d\" pid:7643 exited_at:{seconds:1747180054 nanos:934995896}" May 13 23:47:35.592727 sshd[7599]: Connection closed by 10.200.16.10 port 58324 May 13 23:47:35.593597 sshd-session[7597]: pam_unix(sshd:session): session closed for user core May 13 23:47:35.596968 systemd[1]: sshd@26-10.200.20.40:22-10.200.16.10:58324.service: Deactivated successfully. May 13 23:47:35.598940 systemd[1]: session-29.scope: Deactivated successfully. May 13 23:47:35.600184 systemd[1]: session-29.scope: Consumed 460ms CPU time, 66.2M memory peak. May 13 23:47:35.601437 systemd-logind[1726]: Session 29 logged out. Waiting for processes to exit. May 13 23:47:35.602392 systemd-logind[1726]: Removed session 29. May 13 23:47:35.675563 systemd[1]: Started sshd@27-10.200.20.40:22-10.200.16.10:58326.service - OpenSSH per-connection server daemon (10.200.16.10:58326). 
May 13 23:47:36.133555 sshd[7663]: Accepted publickey for core from 10.200.16.10 port 58326 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:36.134920 sshd-session[7663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:36.139380 systemd-logind[1726]: New session 30 of user core. May 13 23:47:36.148373 systemd[1]: Started session-30.scope - Session 30 of User core. May 13 23:47:36.632379 sshd[7666]: Connection closed by 10.200.16.10 port 58326 May 13 23:47:36.631896 sshd-session[7663]: pam_unix(sshd:session): session closed for user core May 13 23:47:36.635444 systemd[1]: sshd@27-10.200.20.40:22-10.200.16.10:58326.service: Deactivated successfully. May 13 23:47:36.637791 systemd[1]: session-30.scope: Deactivated successfully. May 13 23:47:36.638843 systemd-logind[1726]: Session 30 logged out. Waiting for processes to exit. May 13 23:47:36.640114 systemd-logind[1726]: Removed session 30. May 13 23:47:36.724144 systemd[1]: Started sshd@28-10.200.20.40:22-10.200.16.10:58336.service - OpenSSH per-connection server daemon (10.200.16.10:58336). May 13 23:47:37.211148 sshd[7676]: Accepted publickey for core from 10.200.16.10 port 58336 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:37.212474 sshd-session[7676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:37.217235 systemd-logind[1726]: New session 31 of user core. May 13 23:47:37.223419 systemd[1]: Started session-31.scope - Session 31 of User core. May 13 23:47:37.616974 sshd[7678]: Connection closed by 10.200.16.10 port 58336 May 13 23:47:37.617669 sshd-session[7676]: pam_unix(sshd:session): session closed for user core May 13 23:47:37.621030 systemd[1]: sshd@28-10.200.20.40:22-10.200.16.10:58336.service: Deactivated successfully. May 13 23:47:37.622784 systemd[1]: session-31.scope: Deactivated successfully. May 13 23:47:37.623980 systemd-logind[1726]: Session 31 logged out. Waiting for processes to exit. May 13 23:47:37.625011 systemd-logind[1726]: Removed session 31. May 13 23:47:42.705193 systemd[1]: Started sshd@29-10.200.20.40:22-10.200.16.10:41074.service - OpenSSH per-connection server daemon (10.200.16.10:41074). May 13 23:47:43.168455 sshd[7690]: Accepted publickey for core from 10.200.16.10 port 41074 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:43.169817 sshd-session[7690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:43.174500 systemd-logind[1726]: New session 32 of user core. May 13 23:47:43.177352 systemd[1]: Started session-32.scope - Session 32 of User core. May 13 23:47:43.553249 sshd[7692]: Connection closed by 10.200.16.10 port 41074 May 13 23:47:43.554093 sshd-session[7690]: pam_unix(sshd:session): session closed for user core May 13 23:47:43.557357 systemd[1]: sshd@29-10.200.20.40:22-10.200.16.10:41074.service: Deactivated successfully. May 13 23:47:43.561328 systemd[1]: session-32.scope: Deactivated successfully. May 13 23:47:43.562005 systemd-logind[1726]: Session 32 logged out. Waiting for processes to exit. May 13 23:47:43.563322 systemd-logind[1726]: Removed session 32. May 13 23:47:48.637325 systemd[1]: Started sshd@30-10.200.20.40:22-10.200.16.10:33568.service - OpenSSH per-connection server daemon (10.200.16.10:33568). 
May 13 23:47:49.099951 sshd[7704]: Accepted publickey for core from 10.200.16.10 port 33568 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:49.101616 sshd-session[7704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:49.105910 systemd-logind[1726]: New session 33 of user core. May 13 23:47:49.111523 systemd[1]: Started session-33.scope - Session 33 of User core. May 13 23:47:49.486789 sshd[7708]: Connection closed by 10.200.16.10 port 33568 May 13 23:47:49.487410 sshd-session[7704]: pam_unix(sshd:session): session closed for user core May 13 23:47:49.492112 systemd-logind[1726]: Session 33 logged out. Waiting for processes to exit. May 13 23:47:49.492577 systemd[1]: sshd@30-10.200.20.40:22-10.200.16.10:33568.service: Deactivated successfully. May 13 23:47:49.494792 systemd[1]: session-33.scope: Deactivated successfully. May 13 23:47:49.496167 systemd-logind[1726]: Removed session 33. May 13 23:47:54.578443 systemd[1]: Started sshd@31-10.200.20.40:22-10.200.16.10:33576.service - OpenSSH per-connection server daemon (10.200.16.10:33576). May 13 23:47:55.072885 sshd[7722]: Accepted publickey for core from 10.200.16.10 port 33576 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:55.074939 sshd-session[7722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:55.080336 systemd-logind[1726]: New session 34 of user core. May 13 23:47:55.088458 systemd[1]: Started session-34.scope - Session 34 of User core. May 13 23:47:55.477065 sshd[7724]: Connection closed by 10.200.16.10 port 33576 May 13 23:47:55.477677 sshd-session[7722]: pam_unix(sshd:session): session closed for user core May 13 23:47:55.480965 systemd[1]: sshd@31-10.200.20.40:22-10.200.16.10:33576.service: Deactivated successfully. May 13 23:47:55.482679 systemd[1]: session-34.scope: Deactivated successfully. May 13 23:47:55.483470 systemd-logind[1726]: Session 34 logged out. Waiting for processes to exit. May 13 23:47:55.484359 systemd-logind[1726]: Removed session 34. May 13 23:48:00.557792 systemd[1]: Started sshd@32-10.200.20.40:22-10.200.16.10:39638.service - OpenSSH per-connection server daemon (10.200.16.10:39638). May 13 23:48:01.014121 sshd[7735]: Accepted publickey for core from 10.200.16.10 port 39638 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:48:01.015531 sshd-session[7735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:01.020436 systemd-logind[1726]: New session 35 of user core. May 13 23:48:01.024402 systemd[1]: Started session-35.scope - Session 35 of User core. May 13 23:48:01.426365 sshd[7737]: Connection closed by 10.200.16.10 port 39638 May 13 23:48:01.426941 sshd-session[7735]: pam_unix(sshd:session): session closed for user core May 13 23:48:01.430354 systemd[1]: sshd@32-10.200.20.40:22-10.200.16.10:39638.service: Deactivated successfully. May 13 23:48:01.433088 systemd[1]: session-35.scope: Deactivated successfully. May 13 23:48:01.434426 systemd-logind[1726]: Session 35 logged out. Waiting for processes to exit. May 13 23:48:01.436114 systemd-logind[1726]: Removed session 35. 
May 13 23:48:01.897273 containerd[1765]: time="2025-05-13T23:48:01.897136832Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28\" id:\"be9fa4ac08c2f45e431dfceb2a8c4957fafd5caafee581dbf47cfdce081f6af1\" pid:7760 exited_at:{seconds:1747180081 nanos:895822790}" May 13 23:48:04.804773 containerd[1765]: time="2025-05-13T23:48:04.804714742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449\" id:\"fb45f46756d25b9779831bbd651633fe838460a943078f4801b63e6f1dc02d8f\" pid:7784 exited_at:{seconds:1747180084 nanos:804282902}" May 13 23:48:06.514098 systemd[1]: Started sshd@33-10.200.20.40:22-10.200.16.10:39650.service - OpenSSH per-connection server daemon (10.200.16.10:39650). May 13 23:48:07.004700 sshd[7794]: Accepted publickey for core from 10.200.16.10 port 39650 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:48:07.006019 sshd-session[7794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:07.010272 systemd-logind[1726]: New session 36 of user core. May 13 23:48:07.019354 systemd[1]: Started session-36.scope - Session 36 of User core. May 13 23:48:07.422035 sshd[7796]: Connection closed by 10.200.16.10 port 39650 May 13 23:48:07.422631 sshd-session[7794]: pam_unix(sshd:session): session closed for user core May 13 23:48:07.426432 systemd[1]: sshd@33-10.200.20.40:22-10.200.16.10:39650.service: Deactivated successfully. May 13 23:48:07.428398 systemd[1]: session-36.scope: Deactivated successfully. May 13 23:48:07.429110 systemd-logind[1726]: Session 36 logged out. Waiting for processes to exit. May 13 23:48:07.430249 systemd-logind[1726]: Removed session 36. May 13 23:48:12.502725 systemd[1]: Started sshd@34-10.200.20.40:22-10.200.16.10:33380.service - OpenSSH per-connection server daemon (10.200.16.10:33380). May 13 23:48:12.963250 sshd[7809]: Accepted publickey for core from 10.200.16.10 port 33380 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:48:12.964589 sshd-session[7809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:12.968508 systemd-logind[1726]: New session 37 of user core. May 13 23:48:12.977599 systemd[1]: Started session-37.scope - Session 37 of User core. May 13 23:48:13.368803 sshd[7813]: Connection closed by 10.200.16.10 port 33380 May 13 23:48:13.369313 sshd-session[7809]: pam_unix(sshd:session): session closed for user core May 13 23:48:13.372316 systemd-logind[1726]: Session 37 logged out. Waiting for processes to exit. May 13 23:48:13.374040 systemd[1]: sshd@34-10.200.20.40:22-10.200.16.10:33380.service: Deactivated successfully. May 13 23:48:13.377715 systemd[1]: session-37.scope: Deactivated successfully. May 13 23:48:13.378997 systemd-logind[1726]: Removed session 37. May 13 23:48:18.454537 systemd[1]: Started sshd@35-10.200.20.40:22-10.200.16.10:33384.service - OpenSSH per-connection server daemon (10.200.16.10:33384). May 13 23:48:18.910518 sshd[7833]: Accepted publickey for core from 10.200.16.10 port 33384 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:48:18.911844 sshd-session[7833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:18.915755 systemd-logind[1726]: New session 38 of user core. May 13 23:48:18.923341 systemd[1]: Started session-38.scope - Session 38 of User core. 
May 13 23:48:19.312317 sshd[7837]: Connection closed by 10.200.16.10 port 33384
May 13 23:48:19.313178 sshd-session[7833]: pam_unix(sshd:session): session closed for user core
May 13 23:48:19.316691 systemd[1]: sshd@35-10.200.20.40:22-10.200.16.10:33384.service: Deactivated successfully.
May 13 23:48:19.319807 systemd[1]: session-38.scope: Deactivated successfully.
May 13 23:48:19.320648 systemd-logind[1726]: Session 38 logged out. Waiting for processes to exit.
May 13 23:48:19.321800 systemd-logind[1726]: Removed session 38.
May 13 23:48:24.393529 systemd[1]: Started sshd@36-10.200.20.40:22-10.200.16.10:47158.service - OpenSSH per-connection server daemon (10.200.16.10:47158).
May 13 23:48:24.850875 sshd[7849]: Accepted publickey for core from 10.200.16.10 port 47158 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:48:24.851997 sshd-session[7849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:24.856531 systemd-logind[1726]: New session 39 of user core.
May 13 23:48:24.859370 systemd[1]: Started session-39.scope - Session 39 of User core.
May 13 23:48:25.256436 sshd[7851]: Connection closed by 10.200.16.10 port 47158
May 13 23:48:25.257003 sshd-session[7849]: pam_unix(sshd:session): session closed for user core
May 13 23:48:25.262361 systemd[1]: sshd@36-10.200.20.40:22-10.200.16.10:47158.service: Deactivated successfully.
May 13 23:48:25.264419 systemd[1]: session-39.scope: Deactivated successfully.
May 13 23:48:25.265204 systemd-logind[1726]: Session 39 logged out. Waiting for processes to exit.
May 13 23:48:25.266268 systemd-logind[1726]: Removed session 39.
May 13 23:48:30.345914 systemd[1]: Started sshd@37-10.200.20.40:22-10.200.16.10:58140.service - OpenSSH per-connection server daemon (10.200.16.10:58140).
May 13 23:48:30.830394 sshd[7863]: Accepted publickey for core from 10.200.16.10 port 58140 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:48:30.831715 sshd-session[7863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:30.835818 systemd-logind[1726]: New session 40 of user core.
May 13 23:48:30.842347 systemd[1]: Started session-40.scope - Session 40 of User core.
May 13 23:48:31.232446 sshd[7865]: Connection closed by 10.200.16.10 port 58140
May 13 23:48:31.233014 sshd-session[7863]: pam_unix(sshd:session): session closed for user core
May 13 23:48:31.236321 systemd[1]: sshd@37-10.200.20.40:22-10.200.16.10:58140.service: Deactivated successfully.
May 13 23:48:31.238185 systemd[1]: session-40.scope: Deactivated successfully.
May 13 23:48:31.238988 systemd-logind[1726]: Session 40 logged out. Waiting for processes to exit.
May 13 23:48:31.239993 systemd-logind[1726]: Removed session 40.
May 13 23:48:31.893864 containerd[1765]: time="2025-05-13T23:48:31.893809130Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2def8d6c3682338566055205cbdc3b442549a489e675a80b3478541145023d28\" id:\"263e34f4d08c1fd9d293b3fa404f64ec2092adc377d31ff546a72c2952d28424\" pid:7887 exited_at:{seconds:1747180111 nanos:893386090}"
May 13 23:48:34.803910 containerd[1765]: time="2025-05-13T23:48:34.803865916Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449\" id:\"42bf10c8e8aa5e55b7815270324d4b2d0ac481682f9c04b4208effe5077227de\" pid:7923 exited_at:{seconds:1747180114 nanos:803576275}"
May 13 23:48:34.805282 containerd[1765]: time="2025-05-13T23:48:34.804739997Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e20c916c6c3fe3e0c8a2819845efe53b29c6cc9169509f74d3aaeccc4eea449\" id:\"65749d743574230338263d88aadd694435cf4adde322c9c840f49d3662dd9b8d\" pid:7924 exited_at:{seconds:1747180114 nanos:803635436}"
May 13 23:48:36.319336 systemd[1]: Started sshd@38-10.200.20.40:22-10.200.16.10:58156.service - OpenSSH per-connection server daemon (10.200.16.10:58156).
May 13 23:48:36.779827 sshd[7942]: Accepted publickey for core from 10.200.16.10 port 58156 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:48:36.781526 sshd-session[7942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:36.785874 systemd-logind[1726]: New session 41 of user core.
May 13 23:48:36.794368 systemd[1]: Started session-41.scope - Session 41 of User core.
May 13 23:48:37.183207 sshd[7944]: Connection closed by 10.200.16.10 port 58156
May 13 23:48:37.183583 sshd-session[7942]: pam_unix(sshd:session): session closed for user core
May 13 23:48:37.187098 systemd[1]: sshd@38-10.200.20.40:22-10.200.16.10:58156.service: Deactivated successfully.
May 13 23:48:37.189641 systemd[1]: session-41.scope: Deactivated successfully.
May 13 23:48:37.190448 systemd-logind[1726]: Session 41 logged out. Waiting for processes to exit.
May 13 23:48:37.191670 systemd-logind[1726]: Removed session 41.