May 9 23:59:45.335669 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 9 23:59:45.335692 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 9 22:39:45 -00 2025
May 9 23:59:45.335700 kernel: KASLR enabled
May 9 23:59:45.335706 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
May 9 23:59:45.335713 kernel: printk: bootconsole [pl11] enabled
May 9 23:59:45.335719 kernel: efi: EFI v2.7 by EDK II
May 9 23:59:45.335726 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
May 9 23:59:45.335732 kernel: random: crng init done
May 9 23:59:45.335738 kernel: ACPI: Early table checksum verification disabled
May 9 23:59:45.335744 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
May 9 23:59:45.335750 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:59:45.335756 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:59:45.335764 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 9 23:59:45.335770 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:59:45.335777 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:59:45.335784 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:59:45.335790 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:59:45.335798 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:59:45.335805 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:59:45.335811 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
May 9 23:59:45.335817 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 9 23:59:45.335823 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
May 9 23:59:45.335830 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
May 9 23:59:45.335836 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
May 9 23:59:45.335843 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
May 9 23:59:45.335849 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
May 9 23:59:45.335855 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
May 9 23:59:45.335862 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
May 9 23:59:45.335869 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
May 9 23:59:45.335876 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
May 9 23:59:45.335882 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
May 9 23:59:45.335888 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
May 9 23:59:45.335895 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
May 9 23:59:45.335901 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
May 9 23:59:45.335907 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
May 9 23:59:45.335913 kernel: Zone ranges:
May 9 23:59:45.335920 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
May 9 23:59:45.335926 kernel: DMA32 empty
May 9 23:59:45.335932 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
May 9 23:59:45.335938 kernel: Movable zone start for each node
May 9 23:59:45.335949 kernel: Early memory node ranges
May 9 23:59:45.335955 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
May 9 23:59:45.335962 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
May 9 23:59:45.335969 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
May 9 23:59:45.335976 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
May 9 23:59:45.335984 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
May 9 23:59:45.338695 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
May 9 23:59:45.338709 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
May 9 23:59:45.338717 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
May 9 23:59:45.338724 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
May 9 23:59:45.338731 kernel: psci: probing for conduit method from ACPI.
May 9 23:59:45.338738 kernel: psci: PSCIv1.1 detected in firmware.
May 9 23:59:45.338745 kernel: psci: Using standard PSCI v0.2 function IDs
May 9 23:59:45.338751 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 9 23:59:45.338758 kernel: psci: SMC Calling Convention v1.4
May 9 23:59:45.338765 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 9 23:59:45.338772 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
May 9 23:59:45.338784 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 9 23:59:45.338791 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 9 23:59:45.338798 kernel: pcpu-alloc: [0] 0 [0] 1
May 9 23:59:45.338805 kernel: Detected PIPT I-cache on CPU0
May 9 23:59:45.338812 kernel: CPU features: detected: GIC system register CPU interface
May 9 23:59:45.338819 kernel: CPU features: detected: Hardware dirty bit management
May 9 23:59:45.338825 kernel: CPU features: detected: Spectre-BHB
May 9 23:59:45.338832 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 9 23:59:45.338839 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 9 23:59:45.338846 kernel: CPU features: detected: ARM erratum 1418040
May 9 23:59:45.338871 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
May 9 23:59:45.338890 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 9 23:59:45.338897 kernel: alternatives: applying boot alternatives
May 9 23:59:45.338908 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 9 23:59:45.338916 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 23:59:45.338926 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 23:59:45.338933 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 23:59:45.338940 kernel: Fallback order for Node 0: 0
May 9 23:59:45.338947 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
May 9 23:59:45.338954 kernel: Policy zone: Normal
May 9 23:59:45.338960 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 23:59:45.338967 kernel: software IO TLB: area num 2.
May 9 23:59:45.338975 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
May 9 23:59:45.338982 kernel: Memory: 3982624K/4194160K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 211536K reserved, 0K cma-reserved)
May 9 23:59:45.339009 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 9 23:59:45.339016 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 23:59:45.339024 kernel: rcu: RCU event tracing is enabled.
May 9 23:59:45.339031 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 9 23:59:45.339038 kernel: Trampoline variant of Tasks RCU enabled.
May 9 23:59:45.339045 kernel: Tracing variant of Tasks RCU enabled.
May 9 23:59:45.339051 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 23:59:45.339058 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 9 23:59:45.339065 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 9 23:59:45.339073 kernel: GICv3: 960 SPIs implemented
May 9 23:59:45.339080 kernel: GICv3: 0 Extended SPIs implemented
May 9 23:59:45.339087 kernel: Root IRQ handler: gic_handle_irq
May 9 23:59:45.339094 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 9 23:59:45.339100 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
May 9 23:59:45.339107 kernel: ITS: No ITS available, not enabling LPIs
May 9 23:59:45.339114 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 23:59:45.339121 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:59:45.339128 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 9 23:59:45.339134 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 9 23:59:45.339141 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 9 23:59:45.339150 kernel: Console: colour dummy device 80x25
May 9 23:59:45.339157 kernel: printk: console [tty1] enabled
May 9 23:59:45.339164 kernel: ACPI: Core revision 20230628
May 9 23:59:45.339171 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 9 23:59:45.339179 kernel: pid_max: default: 32768 minimum: 301
May 9 23:59:45.339186 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 23:59:45.339199 kernel: landlock: Up and running.
May 9 23:59:45.339207 kernel: SELinux: Initializing.
May 9 23:59:45.339214 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:59:45.339221 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:59:45.339230 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 23:59:45.339237 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 23:59:45.339244 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
May 9 23:59:45.339261 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
May 9 23:59:45.339268 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 9 23:59:45.339275 kernel: rcu: Hierarchical SRCU implementation.
May 9 23:59:45.339282 kernel: rcu: Max phase no-delay instances is 400.
May 9 23:59:45.339296 kernel: Remapping and enabling EFI services.
May 9 23:59:45.339304 kernel: smp: Bringing up secondary CPUs ...
May 9 23:59:45.339311 kernel: Detected PIPT I-cache on CPU1
May 9 23:59:45.339318 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
May 9 23:59:45.339327 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:59:45.339334 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 9 23:59:45.339342 kernel: smp: Brought up 1 node, 2 CPUs
May 9 23:59:45.339349 kernel: SMP: Total of 2 processors activated.
May 9 23:59:45.339356 kernel: CPU features: detected: 32-bit EL0 Support
May 9 23:59:45.339365 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
May 9 23:59:45.339373 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 9 23:59:45.339381 kernel: CPU features: detected: CRC32 instructions
May 9 23:59:45.339388 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 9 23:59:45.339395 kernel: CPU features: detected: LSE atomic instructions
May 9 23:59:45.339402 kernel: CPU features: detected: Privileged Access Never
May 9 23:59:45.339409 kernel: CPU: All CPU(s) started at EL1
May 9 23:59:45.339417 kernel: alternatives: applying system-wide alternatives
May 9 23:59:45.339424 kernel: devtmpfs: initialized
May 9 23:59:45.339433 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 23:59:45.339440 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 9 23:59:45.339447 kernel: pinctrl core: initialized pinctrl subsystem
May 9 23:59:45.339454 kernel: SMBIOS 3.1.0 present.
May 9 23:59:45.339462 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
May 9 23:59:45.339469 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 23:59:45.339477 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 23:59:45.339484 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 23:59:45.339491 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 23:59:45.339500 kernel: audit: initializing netlink subsys (disabled)
May 9 23:59:45.339507 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
May 9 23:59:45.339515 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 23:59:45.339522 kernel: cpuidle: using governor menu
May 9 23:59:45.339529 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 23:59:45.339536 kernel: ASID allocator initialised with 32768 entries
May 9 23:59:45.339544 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 23:59:45.339551 kernel: Serial: AMBA PL011 UART driver
May 9 23:59:45.339558 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 9 23:59:45.339567 kernel: Modules: 0 pages in range for non-PLT usage
May 9 23:59:45.339574 kernel: Modules: 509008 pages in range for PLT usage
May 9 23:59:45.339582 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 23:59:45.339589 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 23:59:45.339596 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 23:59:45.339604 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 23:59:45.339611 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 23:59:45.339618 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 23:59:45.339626 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 23:59:45.339634 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 23:59:45.339642 kernel: ACPI: Added _OSI(Module Device)
May 9 23:59:45.339649 kernel: ACPI: Added _OSI(Processor Device)
May 9 23:59:45.339656 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 23:59:45.339663 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 23:59:45.339671 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 23:59:45.339678 kernel: ACPI: Interpreter enabled
May 9 23:59:45.339685 kernel: ACPI: Using GIC for interrupt routing
May 9 23:59:45.339693 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
May 9 23:59:45.339702 kernel: printk: console [ttyAMA0] enabled
May 9 23:59:45.339709 kernel: printk: bootconsole [pl11] disabled
May 9 23:59:45.339716 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
May 9 23:59:45.339723 kernel: iommu: Default domain type: Translated
May 9 23:59:45.339731 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 23:59:45.339738 kernel: efivars: Registered efivars operations
May 9 23:59:45.339745 kernel: vgaarb: loaded
May 9 23:59:45.339753 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 23:59:45.339760 kernel: VFS: Disk quotas dquot_6.6.0
May 9 23:59:45.339769 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 23:59:45.339776 kernel: pnp: PnP ACPI init
May 9 23:59:45.339783 kernel: pnp: PnP ACPI: found 0 devices
May 9 23:59:45.339790 kernel: NET: Registered PF_INET protocol family
May 9 23:59:45.339798 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 23:59:45.339805 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 23:59:45.339812 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 23:59:45.339820 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 23:59:45.339827 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 23:59:45.339836 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 23:59:45.339843 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:59:45.339851 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:59:45.339858 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 23:59:45.339865 kernel: PCI: CLS 0 bytes, default 64
May 9 23:59:45.339873 kernel: kvm [1]: HYP mode not available
May 9 23:59:45.339880 kernel: Initialise system trusted keyrings
May 9 23:59:45.339887 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 23:59:45.339895 kernel: Key type asymmetric registered
May 9 23:59:45.339903 kernel: Asymmetric key parser 'x509' registered
May 9 23:59:45.339910 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 9 23:59:45.339918 kernel: io scheduler mq-deadline registered
May 9 23:59:45.339925 kernel: io scheduler kyber registered
May 9 23:59:45.339932 kernel: io scheduler bfq registered
May 9 23:59:45.339939 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 23:59:45.339946 kernel: thunder_xcv, ver 1.0
May 9 23:59:45.339954 kernel: thunder_bgx, ver 1.0
May 9 23:59:45.339961 kernel: nicpf, ver 1.0
May 9 23:59:45.339968 kernel: nicvf, ver 1.0
May 9 23:59:45.340115 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 23:59:45.340189 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:59:44 UTC (1746835184)
May 9 23:59:45.340200 kernel: efifb: probing for efifb
May 9 23:59:45.340230 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 9 23:59:45.340238 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 9 23:59:45.340245 kernel: efifb: scrolling: redraw
May 9 23:59:45.340253 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 9 23:59:45.340263 kernel: Console: switching to colour frame buffer device 128x48
May 9 23:59:45.340270 kernel: fb0: EFI VGA frame buffer device
May 9 23:59:45.340277 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
May 9 23:59:45.340285 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 23:59:45.340292 kernel: No ACPI PMU IRQ for CPU0
May 9 23:59:45.340300 kernel: No ACPI PMU IRQ for CPU1
May 9 23:59:45.340307 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
May 9 23:59:45.340314 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 23:59:45.340321 kernel: watchdog: Hard watchdog permanently disabled
May 9 23:59:45.340330 kernel: NET: Registered PF_INET6 protocol family
May 9 23:59:45.340338 kernel: Segment Routing with IPv6
May 9 23:59:45.340345 kernel: In-situ OAM (IOAM) with IPv6
May 9 23:59:45.340352 kernel: NET: Registered PF_PACKET protocol family
May 9 23:59:45.340359 kernel: Key type dns_resolver registered
May 9 23:59:45.340367 kernel: registered taskstats version 1
May 9 23:59:45.340374 kernel: Loading compiled-in X.509 certificates
May 9 23:59:45.340381 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02a1572fa4e3e92c40cffc658d8dbcab2e5537ff'
May 9 23:59:45.340389 kernel: Key type .fscrypt registered
May 9 23:59:45.340397 kernel: Key type fscrypt-provisioning registered
May 9 23:59:45.340405 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 23:59:45.340412 kernel: ima: Allocated hash algorithm: sha1
May 9 23:59:45.340419 kernel: ima: No architecture policies found
May 9 23:59:45.340427 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 23:59:45.340434 kernel: clk: Disabling unused clocks
May 9 23:59:45.340441 kernel: Freeing unused kernel memory: 39424K
May 9 23:59:45.340448 kernel: Run /init as init process
May 9 23:59:45.340455 kernel: with arguments:
May 9 23:59:45.340464 kernel: /init
May 9 23:59:45.340471 kernel: with environment:
May 9 23:59:45.340478 kernel: HOME=/
May 9 23:59:45.340485 kernel: TERM=linux
May 9 23:59:45.340492 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 23:59:45.340502 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:59:45.340511 systemd[1]: Detected virtualization microsoft.
May 9 23:59:45.340519 systemd[1]: Detected architecture arm64.
May 9 23:59:45.340529 systemd[1]: Running in initrd.
May 9 23:59:45.340536 systemd[1]: No hostname configured, using default hostname.
May 9 23:59:45.340544 systemd[1]: Hostname set to .
May 9 23:59:45.340552 systemd[1]: Initializing machine ID from random generator.
May 9 23:59:45.340560 systemd[1]: Queued start job for default target initrd.target.
May 9 23:59:45.340567 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:59:45.340575 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:59:45.340584 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 23:59:45.340593 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:59:45.340601 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 23:59:45.340609 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 23:59:45.340618 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 23:59:45.340626 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 23:59:45.340634 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:59:45.340644 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:59:45.340651 systemd[1]: Reached target paths.target - Path Units.
May 9 23:59:45.340659 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:59:45.340667 systemd[1]: Reached target swap.target - Swaps.
May 9 23:59:45.340675 systemd[1]: Reached target timers.target - Timer Units.
May 9 23:59:45.340683 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:59:45.340690 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:59:45.340698 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 23:59:45.340706 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 23:59:45.340716 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:59:45.340724 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:59:45.340731 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:59:45.340739 systemd[1]: Reached target sockets.target - Socket Units.
May 9 23:59:45.340747 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 23:59:45.340755 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:59:45.340763 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 23:59:45.340771 systemd[1]: Starting systemd-fsck-usr.service...
May 9 23:59:45.340779 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:59:45.340788 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:59:45.340815 systemd-journald[217]: Collecting audit messages is disabled.
May 9 23:59:45.340834 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:45.340843 systemd-journald[217]: Journal started
May 9 23:59:45.340863 systemd-journald[217]: Runtime Journal (/run/log/journal/6760a407007a4f41a57ef1706b6e6a87) is 8.0M, max 78.5M, 70.5M free.
May 9 23:59:45.339236 systemd-modules-load[218]: Inserted module 'overlay'
May 9 23:59:45.375376 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 23:59:45.375400 kernel: Bridge firewalling registered
May 9 23:59:45.375416 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:59:45.364824 systemd-modules-load[218]: Inserted module 'br_netfilter'
May 9 23:59:45.381087 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 23:59:45.394956 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:59:45.409694 systemd[1]: Finished systemd-fsck-usr.service.
May 9 23:59:45.420709 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:59:45.437323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:45.455133 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:59:45.474164 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:59:45.491168 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 23:59:45.511191 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:59:45.522981 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:59:45.535574 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:59:45.548743 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:59:45.564204 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 23:59:45.584190 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:59:45.605087 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:59:45.620792 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:59:45.636649 dracut-cmdline[248]: dracut-dracut-053
May 9 23:59:45.636649 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 9 23:59:45.681209 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:59:45.707038 kernel: SCSI subsystem initialized
May 9 23:59:45.707067 kernel: Loading iSCSI transport class v2.0-870.
May 9 23:59:45.714014 kernel: iscsi: registered transport (tcp)
May 9 23:59:45.722658 systemd-resolved[326]: Positive Trust Anchors:
May 9 23:59:45.740656 kernel: iscsi: registered transport (qla4xxx)
May 9 23:59:45.740680 kernel: QLogic iSCSI HBA Driver
May 9 23:59:45.726861 systemd-resolved[326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:59:45.726896 systemd-resolved[326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:59:45.731586 systemd-resolved[326]: Defaulting to hostname 'linux'.
May 9 23:59:45.732395 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:59:45.749417 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:59:45.778029 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 23:59:45.820256 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 23:59:45.864163 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 23:59:45.864230 kernel: device-mapper: uevent: version 1.0.3
May 9 23:59:45.870563 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 23:59:45.928020 kernel: raid6: neonx8 gen() 15792 MB/s
May 9 23:59:45.940001 kernel: raid6: neonx4 gen() 15637 MB/s
May 9 23:59:45.960016 kernel: raid6: neonx2 gen() 13243 MB/s
May 9 23:59:45.981002 kernel: raid6: neonx1 gen() 10470 MB/s
May 9 23:59:46.001000 kernel: raid6: int64x8 gen() 6959 MB/s
May 9 23:59:46.021006 kernel: raid6: int64x4 gen() 7335 MB/s
May 9 23:59:46.042005 kernel: raid6: int64x2 gen() 6130 MB/s
May 9 23:59:46.065659 kernel: raid6: int64x1 gen() 5062 MB/s
May 9 23:59:46.065680 kernel: raid6: using algorithm neonx8 gen() 15792 MB/s
May 9 23:59:46.089250 kernel: raid6: .... xor() 11932 MB/s, rmw enabled
May 9 23:59:46.089271 kernel: raid6: using neon recovery algorithm
May 9 23:59:46.101862 kernel: xor: measuring software checksum speed
May 9 23:59:46.101884 kernel: 8regs : 19731 MB/sec
May 9 23:59:46.105459 kernel: 32regs : 19627 MB/sec
May 9 23:59:46.108840 kernel: arm64_neon : 27007 MB/sec
May 9 23:59:46.113110 kernel: xor: using function: arm64_neon (27007 MB/sec)
May 9 23:59:46.164017 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 23:59:46.174542 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:59:46.190131 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:59:46.213309 systemd-udevd[437]: Using default interface naming scheme 'v255'.
May 9 23:59:46.218703 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:59:46.236111 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 23:59:46.259062 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
May 9 23:59:46.285904 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:59:46.303461 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:59:46.340451 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:59:46.358147 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 23:59:46.390242 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 23:59:46.407684 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:59:46.424659 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:59:46.437841 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:59:46.454173 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 23:59:46.470113 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:59:46.487175 kernel: hv_vmbus: Vmbus version:5.3
May 9 23:59:46.470325 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:59:46.494944 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:59:46.514537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:59:46.538778 kernel: pps_core: LinuxPPS API ver. 1 registered
May 9 23:59:46.538804 kernel: hv_vmbus: registering driver hv_netvsc
May 9 23:59:46.538814 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 9 23:59:46.538824 kernel: hv_vmbus: registering driver hyperv_keyboard
May 9 23:59:46.514797 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:46.554686 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:46.586386 kernel: hv_vmbus: registering driver hid_hyperv
May 9 23:59:46.586408 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
May 9 23:59:46.586419 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
May 9 23:59:46.594371 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 9 23:59:46.600482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:46.621843 kernel: PTP clock support registered
May 9 23:59:46.621864 kernel: hv_vmbus: registering driver hv_storvsc
May 9 23:59:46.621874 kernel: scsi host0: storvsc_host_t
May 9 23:59:46.622041 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 9 23:59:46.632277 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:59:46.644102 kernel: scsi host1: storvsc_host_t
May 9 23:59:46.646475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:46.666197 kernel: hv_netvsc 000d3afe-e1d6-000d-3afe-e1d6000d3afe eth0: VF slot 1 added
May 9 23:59:46.658604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:59:46.692844 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
May 9 23:59:46.658661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:46.680353 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:46.706165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:46.731081 kernel: hv_utils: Registering HyperV Utility Driver
May 9 23:59:46.731132 kernel: hv_vmbus: registering driver hv_utils
May 9 23:59:46.750551 kernel: hv_utils: Heartbeat IC version 3.0
May 9 23:59:46.750599 kernel: hv_utils: Shutdown IC version 3.2
May 9 23:59:46.750610 kernel: hv_vmbus: registering driver hv_pci
May 9 23:59:46.750620 kernel: hv_utils: TimeSync IC version 4.0
May 9 23:59:46.772731 systemd-resolved[326]: Clock change detected. Flushing caches.
May 9 23:59:46.790214 kernel: hv_pci f40c62b7-2690-4fd4-93b5-47ae0823cd3d: PCI VMBus probing: Using version 0x10004
May 9 23:59:46.790378 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 9 23:59:46.779513 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:46.819651 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 9 23:59:46.819673 kernel: hv_pci f40c62b7-2690-4fd4-93b5-47ae0823cd3d: PCI host bridge to bus 2690:00
May 9 23:59:46.819826 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 9 23:59:46.817897 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:59:46.848053 kernel: pci_bus 2690:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
May 9 23:59:46.848271 kernel: pci_bus 2690:00: No busn resource found for root bus, will use [bus 00-ff]
May 9 23:59:46.865969 kernel: pci 2690:00:02.0: [15b3:1018] type 00 class 0x020000
May 9 23:59:46.866044 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 9 23:59:46.866207 kernel: pci 2690:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 9 23:59:46.875598 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 9 23:59:46.875815 kernel: pci 2690:00:02.0: enabling Extended Tags
May 9 23:59:46.884232 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 9 23:59:46.884758 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 9 23:59:46.904743 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 9 23:59:46.904930 kernel: pci 2690:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2690:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
May 9 23:59:46.905862 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:59:46.942592 kernel: pci_bus 2690:00: busn_res: [bus 00-ff] end is updated to 00
May 9 23:59:46.943029 kernel: pci 2690:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 9 23:59:46.943211 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 9 23:59:46.943223 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 9 23:59:46.985294 kernel: mlx5_core 2690:00:02.0: enabling device (0000 -> 0002)
May 9 23:59:46.991736 kernel: mlx5_core 2690:00:02.0: firmware version: 16.30.1284
May 9 23:59:47.193702 kernel: hv_netvsc 000d3afe-e1d6-000d-3afe-e1d6000d3afe eth0: VF registering: eth1
May 9 23:59:47.193939 kernel: mlx5_core 2690:00:02.0 eth1: joined to eth0
May 9 23:59:47.200979 kernel: mlx5_core 2690:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
May 9 23:59:47.212745 kernel: mlx5_core 2690:00:02.0 enP9872s1: renamed from eth1
May 9 23:59:47.890993 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
May 9 23:59:47.929745 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (496)
May 9 23:59:47.943583 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 9 23:59:48.019573 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
May 9 23:59:48.060762 kernel: BTRFS: device fsid 7278434d-1c51-4098-9ab9-92db46b8a354 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (487)
May 9 23:59:48.075714 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
May 9 23:59:48.083198 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
May 9 23:59:48.112058 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 23:59:48.138748 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 9 23:59:48.147750 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 9 23:59:49.158603 disk-uuid[603]: The operation has completed successfully.
May 9 23:59:49.164082 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 9 23:59:49.221145 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 23:59:49.221244 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 23:59:49.257974 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 23:59:49.271856 sh[689]: Success
May 9 23:59:49.293802 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 9 23:59:49.603610 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 23:59:49.627256 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 23:59:49.633240 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 23:59:49.670743 kernel: BTRFS info (device dm-0): first mount of filesystem 7278434d-1c51-4098-9ab9-92db46b8a354
May 9 23:59:49.670803 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:49.679028 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 23:59:49.684314 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 23:59:49.688482 kernel: BTRFS info (device dm-0): using free space tree
May 9 23:59:50.339516 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 23:59:50.345756 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 23:59:50.370025 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 23:59:50.381955 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 23:59:50.416564 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:50.416593 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:50.416603 kernel: BTRFS info (device sda6): using free space tree
May 9 23:59:50.451761 kernel: BTRFS info (device sda6): auto enabling async discard
May 9 23:59:50.467990 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 23:59:50.473799 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:50.479922 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 23:59:50.496001 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 23:59:50.523436 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:59:50.542894 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:59:50.571089 systemd-networkd[873]: lo: Link UP
May 9 23:59:50.571103 systemd-networkd[873]: lo: Gained carrier
May 9 23:59:50.572690 systemd-networkd[873]: Enumeration completed
May 9 23:59:50.572812 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 23:59:50.577813 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:59:50.577816 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 23:59:50.586124 systemd[1]: Reached target network.target - Network.
May 9 23:59:50.673752 kernel: mlx5_core 2690:00:02.0 enP9872s1: Link up
May 9 23:59:50.716834 kernel: hv_netvsc 000d3afe-e1d6-000d-3afe-e1d6000d3afe eth0: Data path switched to VF: enP9872s1
May 9 23:59:50.717041 systemd-networkd[873]: enP9872s1: Link UP
May 9 23:59:50.717152 systemd-networkd[873]: eth0: Link UP
May 9 23:59:50.717252 systemd-networkd[873]: eth0: Gained carrier
May 9 23:59:50.717261 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:59:50.735030 systemd-networkd[873]: enP9872s1: Gained carrier
May 9 23:59:50.752792 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 9 23:59:51.939219 ignition[842]: Ignition 2.19.0
May 9 23:59:51.939234 ignition[842]: Stage: fetch-offline
May 9 23:59:51.945014 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:59:51.939278 ignition[842]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:51.939287 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:59:51.939381 ignition[842]: parsed url from cmdline: ""
May 9 23:59:51.939383 ignition[842]: no config URL provided
May 9 23:59:51.939388 ignition[842]: reading system config file "/usr/lib/ignition/user.ign"
May 9 23:59:51.974021 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 9 23:59:51.939395 ignition[842]: no config at "/usr/lib/ignition/user.ign"
May 9 23:59:51.939400 ignition[842]: failed to fetch config: resource requires networking
May 9 23:59:51.939771 ignition[842]: Ignition finished successfully
May 9 23:59:51.995228 ignition[884]: Ignition 2.19.0
May 9 23:59:51.995235 ignition[884]: Stage: fetch
May 9 23:59:51.995485 ignition[884]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:51.995498 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:59:51.995615 ignition[884]: parsed url from cmdline: ""
May 9 23:59:51.995618 ignition[884]: no config URL provided
May 9 23:59:51.995623 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
May 9 23:59:51.995630 ignition[884]: no config at "/usr/lib/ignition/user.ign"
May 9 23:59:51.995659 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 9 23:59:52.122613 ignition[884]: GET result: OK
May 9 23:59:52.122701 ignition[884]: config has been read from IMDS userdata
May 9 23:59:52.122774 ignition[884]: parsing config with SHA512: 6eebbf69e011fa43ebc1647f13b82d895dbed2435bad1b1243fed51f9225f14c76c4b08d51b551cf0ad289cebe32f6a8f6933ffea32d21b43e317912cb01396c
May 9 23:59:52.127464 unknown[884]: fetched base config from "system"
May 9 23:59:52.127969 ignition[884]: fetch: fetch complete
May 9 23:59:52.127474 unknown[884]: fetched base config from "system"
May 9 23:59:52.127975 ignition[884]: fetch: fetch passed
May 9 23:59:52.127479 unknown[884]: fetched user config from "azure"
May 9 23:59:52.128024 ignition[884]: Ignition finished successfully
May 9 23:59:52.133692 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 9 23:59:52.180420 ignition[890]: Ignition 2.19.0
May 9 23:59:52.153122 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 23:59:52.180431 ignition[890]: Stage: kargs
May 9 23:59:52.188304 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 23:59:52.180634 ignition[890]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:52.196789 systemd-networkd[873]: enP9872s1: Gained IPv6LL
May 9 23:59:52.180644 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:59:52.214010 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 23:59:52.181677 ignition[890]: kargs: kargs passed
May 9 23:59:52.240679 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 23:59:52.181753 ignition[890]: Ignition finished successfully
May 9 23:59:52.248465 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 23:59:52.237260 ignition[897]: Ignition 2.19.0
May 9 23:59:52.260297 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 23:59:52.237267 ignition[897]: Stage: disks
May 9 23:59:52.273463 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:59:52.237487 ignition[897]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:52.283615 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:59:52.237498 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:59:52.296690 systemd[1]: Reached target basic.target - Basic System.
May 9 23:59:52.238882 ignition[897]: disks: disks passed
May 9 23:59:52.321992 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 23:59:52.238940 ignition[897]: Ignition finished successfully
May 9 23:59:52.433671 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
May 9 23:59:52.441625 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 23:59:52.449086 systemd-networkd[873]: eth0: Gained IPv6LL
May 9 23:59:52.458986 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 23:59:52.518756 kernel: EXT4-fs (sda9): mounted filesystem ffdb9517-5190-4050-8f70-de9d48dc1858 r/w with ordered data mode. Quota mode: none.
May 9 23:59:52.520085 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 23:59:52.526023 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 23:59:52.597867 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:59:52.609648 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 23:59:52.620931 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 9 23:59:52.643845 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (916)
May 9 23:59:52.637669 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 23:59:52.677246 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:52.677272 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:52.677283 kernel: BTRFS info (device sda6): using free space tree
May 9 23:59:52.637707 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:59:52.683215 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 23:59:52.705742 kernel: BTRFS info (device sda6): auto enabling async discard
May 9 23:59:52.711015 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 23:59:52.724135 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:59:53.611056 coreos-metadata[918]: May 09 23:59:53.611 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 9 23:59:53.619602 coreos-metadata[918]: May 09 23:59:53.619 INFO Fetch successful
May 9 23:59:53.619602 coreos-metadata[918]: May 09 23:59:53.619 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 9 23:59:53.637542 coreos-metadata[918]: May 09 23:59:53.637 INFO Fetch successful
May 9 23:59:53.637542 coreos-metadata[918]: May 09 23:59:53.637 INFO wrote hostname ci-4081.3.3-n-4cc30cd86c to /sysroot/etc/hostname
May 9 23:59:53.644839 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 9 23:59:54.002062 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory
May 9 23:59:54.035668 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
May 9 23:59:54.046554 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
May 9 23:59:54.057653 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 23:59:56.002679 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 23:59:56.021005 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 23:59:56.028532 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 23:59:56.052493 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 23:59:56.060761 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:56.078762 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 23:59:56.093461 ignition[1038]: INFO : Ignition 2.19.0
May 9 23:59:56.093461 ignition[1038]: INFO : Stage: mount
May 9 23:59:56.102467 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:59:56.102467 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:59:56.102467 ignition[1038]: INFO : mount: mount passed
May 9 23:59:56.102467 ignition[1038]: INFO : Ignition finished successfully
May 9 23:59:56.103533 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 23:59:56.128891 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 23:59:56.142935 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:59:56.172747 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1048)
May 9 23:59:56.186610 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:56.186659 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:56.190974 kernel: BTRFS info (device sda6): using free space tree
May 9 23:59:56.197753 kernel: BTRFS info (device sda6): auto enabling async discard
May 9 23:59:56.199168 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:59:56.222021 ignition[1066]: INFO : Ignition 2.19.0
May 9 23:59:56.222021 ignition[1066]: INFO : Stage: files
May 9 23:59:56.229938 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:59:56.229938 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:59:56.229938 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping
May 9 23:59:56.247741 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 23:59:56.247741 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 23:59:56.542208 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 23:59:56.550439 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 23:59:56.550439 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 23:59:56.543149 unknown[1066]: wrote ssh authorized keys file for user: core
May 9 23:59:56.571889 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 9 23:59:56.571889 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 9 23:59:56.571889 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 23:59:56.571889 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 9 23:59:56.613521 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 9 23:59:56.984111 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 9 23:59:57.184126 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:57.184126 ignition[1066]: INFO : files: op(c): [started] processing unit "containerd.service"
May 9 23:59:57.241188 ignition[1066]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 9 23:59:57.255088 ignition[1066]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:59:57.255088 ignition[1066]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:59:57.255088 ignition[1066]: INFO : files: files passed
May 9 23:59:57.255088 ignition[1066]: INFO : Ignition finished successfully
May 9 23:59:57.256148 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 23:59:57.302027 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 23:59:57.322916 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 23:59:57.409064 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:59:57.409064 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:59:57.334494 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 23:59:57.444117 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:59:57.334660 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 23:59:57.409813 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:59:57.424444 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 23:59:57.459942 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 23:59:57.494059 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 23:59:57.494191 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 23:59:57.506850 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 23:59:57.520210 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 23:59:57.531902 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 23:59:57.547975 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 23:59:57.568439 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:59:57.587001 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 23:59:57.607529 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 23:59:57.607657 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 23:59:57.620145 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 23:59:57.632869 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:59:57.645651 systemd[1]: Stopped target timers.target - Timer Units.
May 9 23:59:57.656779 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 23:59:57.656860 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:59:57.673044 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 23:59:57.684873 systemd[1]: Stopped target basic.target - Basic System.
May 9 23:59:57.695505 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 23:59:57.706103 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:59:57.718072 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 23:59:57.730871 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 23:59:57.742268 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:59:57.754432 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 23:59:57.766533 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 23:59:57.777925 systemd[1]: Stopped target swap.target - Swaps.
May 9 23:59:57.787805 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 23:59:57.787890 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:59:57.803513 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 23:59:57.809928 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:59:57.822321 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 23:59:57.822381 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:59:57.835003 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 23:59:57.835072 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 23:59:57.852252 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 23:59:57.852322 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:59:57.866277 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 23:59:57.866333 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 23:59:57.876988 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 9 23:59:57.877027 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 9 23:59:57.941688 ignition[1119]: INFO : Ignition 2.19.0
May 9 23:59:57.941688 ignition[1119]: INFO : Stage: umount
May 9 23:59:57.941688 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:59:57.941688 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:59:57.941688 ignition[1119]: INFO : umount: umount passed
May 9 23:59:57.941688 ignition[1119]: INFO : Ignition finished successfully
May 9 23:59:57.909918 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 23:59:57.926862 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 23:59:57.926958 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:59:57.943930 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 23:59:57.949487 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 23:59:57.949556 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:59:57.956305 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 23:59:57.956356 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:59:57.978983 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 23:59:57.979461 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 23:59:57.979585 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 23:59:57.985749 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 23:59:57.985810 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 23:59:58.005418 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 23:59:58.005497 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 23:59:58.012088 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 9 23:59:58.012142 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 9 23:59:58.022374 systemd[1]: Stopped target network.target - Network. May 9 23:59:58.033219 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 23:59:58.033283 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 23:59:58.040652 systemd[1]: Stopped target paths.target - Path Units. May 9 23:59:58.050530 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 23:59:58.056245 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:59:58.063376 systemd[1]: Stopped target slices.target - Slice Units. May 9 23:59:58.074844 systemd[1]: Stopped target sockets.target - Socket Units. May 9 23:59:58.084999 systemd[1]: iscsid.socket: Deactivated successfully. May 9 23:59:58.085048 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 23:59:58.096304 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 23:59:58.096345 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 23:59:58.107866 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 23:59:58.107920 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 23:59:58.118447 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 23:59:58.118490 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 23:59:58.130156 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 23:59:58.136347 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 23:59:58.392612 kernel: hv_netvsc 000d3afe-e1d6-000d-3afe-e1d6000d3afe eth0: Data path switched from VF: enP9872s1 May 9 23:59:58.158801 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 23:59:58.158964 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
May 9 23:59:58.163280 systemd-networkd[873]: eth0: DHCPv6 lease lost May 9 23:59:58.172833 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 23:59:58.172956 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 23:59:58.181937 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 23:59:58.182160 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 23:59:58.210912 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 23:59:58.220575 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 23:59:58.220645 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 23:59:58.232461 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 23:59:58.232511 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:58.243944 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 23:59:58.243993 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 23:59:58.255946 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 23:59:58.255989 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:59:58.268308 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:59:58.314438 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 23:59:58.314606 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:59:58.327258 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 23:59:58.327306 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 23:59:58.338302 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 23:59:58.338349 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
May 9 23:59:58.349847 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 23:59:58.349899 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 23:59:58.366486 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 23:59:58.366530 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 23:59:58.386797 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 23:59:58.386898 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:59:58.419950 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 23:59:58.436273 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 23:59:58.436347 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:59:58.448243 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:59:58.448290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:59:58.461045 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 23:59:58.461161 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 23:59:58.472222 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 9 23:59:58.472313 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 23:59:58.485293 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 23:59:58.485404 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 9 23:59:58.499569 systemd[1]: network-cleanup.service: Deactivated successfully. May 9 23:59:58.499670 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 23:59:58.510764 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 23:59:58.540983 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 23:59:58.671838 systemd[1]: Switching root. 
May 9 23:59:58.731164 systemd-journald[217]: Journal stopped May 9 23:59:45.335669 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 9 23:59:45.335692 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 9 22:39:45 -00 2025 May 9 23:59:45.335700 kernel: KASLR enabled May 9 23:59:45.335706 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') May 9 23:59:45.335713 kernel: printk: bootconsole [pl11] enabled May 9 23:59:45.335719 kernel: efi: EFI v2.7 by EDK II May 9 23:59:45.335726 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 May 9 23:59:45.335732 kernel: random: crng init done May 9 23:59:45.335738 kernel: ACPI: Early table checksum verification disabled May 9 23:59:45.335744 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) May 9 23:59:45.335750 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 9 23:59:45.335756 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 9 23:59:45.335764 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) May 9 23:59:45.335770 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) May 9 23:59:45.335777 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 9 23:59:45.335784 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 9 23:59:45.335790 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 9 23:59:45.335798 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 9 23:59:45.335805 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 
00000001 MSFT 00000001) May 9 23:59:45.335811 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) May 9 23:59:45.335817 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 9 23:59:45.335823 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 May 9 23:59:45.335830 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] May 9 23:59:45.335836 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] May 9 23:59:45.335843 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] May 9 23:59:45.335849 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] May 9 23:59:45.335855 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] May 9 23:59:45.335862 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] May 9 23:59:45.335869 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] May 9 23:59:45.335876 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] May 9 23:59:45.335882 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] May 9 23:59:45.335888 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] May 9 23:59:45.335895 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] May 9 23:59:45.335901 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] May 9 23:59:45.335907 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] May 9 23:59:45.335913 kernel: Zone ranges: May 9 23:59:45.335920 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] May 9 23:59:45.335926 kernel: DMA32 empty May 9 23:59:45.335932 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] May 9 23:59:45.335938 kernel: Movable zone start for each node May 9 23:59:45.335949 kernel: Early memory node ranges May 9 23:59:45.335955 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] May 9 23:59:45.335962 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] May 9 
23:59:45.335969 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] May 9 23:59:45.335976 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] May 9 23:59:45.335984 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] May 9 23:59:45.338695 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] May 9 23:59:45.338709 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] May 9 23:59:45.338717 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] May 9 23:59:45.338724 kernel: On node 0, zone DMA: 36 pages in unavailable ranges May 9 23:59:45.338731 kernel: psci: probing for conduit method from ACPI. May 9 23:59:45.338738 kernel: psci: PSCIv1.1 detected in firmware. May 9 23:59:45.338745 kernel: psci: Using standard PSCI v0.2 function IDs May 9 23:59:45.338751 kernel: psci: MIGRATE_INFO_TYPE not supported. May 9 23:59:45.338758 kernel: psci: SMC Calling Convention v1.4 May 9 23:59:45.338765 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 9 23:59:45.338772 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 May 9 23:59:45.338784 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 9 23:59:45.338791 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 9 23:59:45.338798 kernel: pcpu-alloc: [0] 0 [0] 1 May 9 23:59:45.338805 kernel: Detected PIPT I-cache on CPU0 May 9 23:59:45.338812 kernel: CPU features: detected: GIC system register CPU interface May 9 23:59:45.338819 kernel: CPU features: detected: Hardware dirty bit management May 9 23:59:45.338825 kernel: CPU features: detected: Spectre-BHB May 9 23:59:45.338832 kernel: CPU features: kernel page table isolation forced ON by KASLR May 9 23:59:45.338839 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 9 23:59:45.338846 kernel: CPU features: detected: ARM erratum 1418040 May 9 23:59:45.338871 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) May 9 23:59:45.338890 kernel: CPU features: 
detected: SSBS not fully self-synchronizing May 9 23:59:45.338897 kernel: alternatives: applying boot alternatives May 9 23:59:45.338908 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4 May 9 23:59:45.338916 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 9 23:59:45.338926 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 9 23:59:45.338933 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 9 23:59:45.338940 kernel: Fallback order for Node 0: 0 May 9 23:59:45.338947 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 May 9 23:59:45.338954 kernel: Policy zone: Normal May 9 23:59:45.338960 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 9 23:59:45.338967 kernel: software IO TLB: area num 2. May 9 23:59:45.338975 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) May 9 23:59:45.338982 kernel: Memory: 3982624K/4194160K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 211536K reserved, 0K cma-reserved) May 9 23:59:45.339009 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 9 23:59:45.339016 kernel: rcu: Preemptible hierarchical RCU implementation. May 9 23:59:45.339024 kernel: rcu: RCU event tracing is enabled. May 9 23:59:45.339031 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 9 23:59:45.339038 kernel: Trampoline variant of Tasks RCU enabled. May 9 23:59:45.339045 kernel: Tracing variant of Tasks RCU enabled. 
May 9 23:59:45.339051 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 9 23:59:45.339058 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 9 23:59:45.339065 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 9 23:59:45.339073 kernel: GICv3: 960 SPIs implemented May 9 23:59:45.339080 kernel: GICv3: 0 Extended SPIs implemented May 9 23:59:45.339087 kernel: Root IRQ handler: gic_handle_irq May 9 23:59:45.339094 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 9 23:59:45.339100 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 May 9 23:59:45.339107 kernel: ITS: No ITS available, not enabling LPIs May 9 23:59:45.339114 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 9 23:59:45.339121 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:59:45.339128 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 9 23:59:45.339134 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 9 23:59:45.339141 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 9 23:59:45.339150 kernel: Console: colour dummy device 80x25 May 9 23:59:45.339157 kernel: printk: console [tty1] enabled May 9 23:59:45.339164 kernel: ACPI: Core revision 20230628 May 9 23:59:45.339171 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 9 23:59:45.339179 kernel: pid_max: default: 32768 minimum: 301 May 9 23:59:45.339186 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 9 23:59:45.339199 kernel: landlock: Up and running. May 9 23:59:45.339207 kernel: SELinux: Initializing. 
May 9 23:59:45.339214 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 23:59:45.339221 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 23:59:45.339230 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 9 23:59:45.339237 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 9 23:59:45.339244 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 May 9 23:59:45.339261 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 May 9 23:59:45.339268 kernel: Hyper-V: enabling crash_kexec_post_notifiers May 9 23:59:45.339275 kernel: rcu: Hierarchical SRCU implementation. May 9 23:59:45.339282 kernel: rcu: Max phase no-delay instances is 400. May 9 23:59:45.339296 kernel: Remapping and enabling EFI services. May 9 23:59:45.339304 kernel: smp: Bringing up secondary CPUs ... May 9 23:59:45.339311 kernel: Detected PIPT I-cache on CPU1 May 9 23:59:45.339318 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 May 9 23:59:45.339327 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:59:45.339334 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 9 23:59:45.339342 kernel: smp: Brought up 1 node, 2 CPUs May 9 23:59:45.339349 kernel: SMP: Total of 2 processors activated. 
May 9 23:59:45.339356 kernel: CPU features: detected: 32-bit EL0 Support May 9 23:59:45.339365 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence May 9 23:59:45.339373 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 9 23:59:45.339381 kernel: CPU features: detected: CRC32 instructions May 9 23:59:45.339388 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 9 23:59:45.339395 kernel: CPU features: detected: LSE atomic instructions May 9 23:59:45.339402 kernel: CPU features: detected: Privileged Access Never May 9 23:59:45.339409 kernel: CPU: All CPU(s) started at EL1 May 9 23:59:45.339417 kernel: alternatives: applying system-wide alternatives May 9 23:59:45.339424 kernel: devtmpfs: initialized May 9 23:59:45.339433 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 9 23:59:45.339440 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 9 23:59:45.339447 kernel: pinctrl core: initialized pinctrl subsystem May 9 23:59:45.339454 kernel: SMBIOS 3.1.0 present. 
May 9 23:59:45.339462 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 May 9 23:59:45.339469 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 9 23:59:45.339477 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 9 23:59:45.339484 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 9 23:59:45.339491 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 9 23:59:45.339500 kernel: audit: initializing netlink subsys (disabled) May 9 23:59:45.339507 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 May 9 23:59:45.339515 kernel: thermal_sys: Registered thermal governor 'step_wise' May 9 23:59:45.339522 kernel: cpuidle: using governor menu May 9 23:59:45.339529 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 9 23:59:45.339536 kernel: ASID allocator initialised with 32768 entries May 9 23:59:45.339544 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 9 23:59:45.339551 kernel: Serial: AMBA PL011 UART driver May 9 23:59:45.339558 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 9 23:59:45.339567 kernel: Modules: 0 pages in range for non-PLT usage May 9 23:59:45.339574 kernel: Modules: 509008 pages in range for PLT usage May 9 23:59:45.339582 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 9 23:59:45.339589 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 9 23:59:45.339596 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 9 23:59:45.339604 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 9 23:59:45.339611 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 9 23:59:45.339618 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 9 23:59:45.339626 kernel: HugeTLB: registered 64.0 KiB page 
size, pre-allocated 0 pages May 9 23:59:45.339634 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 9 23:59:45.339642 kernel: ACPI: Added _OSI(Module Device) May 9 23:59:45.339649 kernel: ACPI: Added _OSI(Processor Device) May 9 23:59:45.339656 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 9 23:59:45.339663 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 9 23:59:45.339671 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 9 23:59:45.339678 kernel: ACPI: Interpreter enabled May 9 23:59:45.339685 kernel: ACPI: Using GIC for interrupt routing May 9 23:59:45.339693 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA May 9 23:59:45.339702 kernel: printk: console [ttyAMA0] enabled May 9 23:59:45.339709 kernel: printk: bootconsole [pl11] disabled May 9 23:59:45.339716 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA May 9 23:59:45.339723 kernel: iommu: Default domain type: Translated May 9 23:59:45.339731 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 9 23:59:45.339738 kernel: efivars: Registered efivars operations May 9 23:59:45.339745 kernel: vgaarb: loaded May 9 23:59:45.339753 kernel: clocksource: Switched to clocksource arch_sys_counter May 9 23:59:45.339760 kernel: VFS: Disk quotas dquot_6.6.0 May 9 23:59:45.339769 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 9 23:59:45.339776 kernel: pnp: PnP ACPI init May 9 23:59:45.339783 kernel: pnp: PnP ACPI: found 0 devices May 9 23:59:45.339790 kernel: NET: Registered PF_INET protocol family May 9 23:59:45.339798 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 9 23:59:45.339805 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 9 23:59:45.339812 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 9 23:59:45.339820 kernel: TCP established hash table 
entries: 32768 (order: 6, 262144 bytes, linear) May 9 23:59:45.339827 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 9 23:59:45.339836 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 9 23:59:45.339843 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 23:59:45.339851 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 23:59:45.339858 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 9 23:59:45.339865 kernel: PCI: CLS 0 bytes, default 64 May 9 23:59:45.339873 kernel: kvm [1]: HYP mode not available May 9 23:59:45.339880 kernel: Initialise system trusted keyrings May 9 23:59:45.339887 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 9 23:59:45.339895 kernel: Key type asymmetric registered May 9 23:59:45.339903 kernel: Asymmetric key parser 'x509' registered May 9 23:59:45.339910 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 9 23:59:45.339918 kernel: io scheduler mq-deadline registered May 9 23:59:45.339925 kernel: io scheduler kyber registered May 9 23:59:45.339932 kernel: io scheduler bfq registered May 9 23:59:45.339939 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 9 23:59:45.339946 kernel: thunder_xcv, ver 1.0 May 9 23:59:45.339954 kernel: thunder_bgx, ver 1.0 May 9 23:59:45.339961 kernel: nicpf, ver 1.0 May 9 23:59:45.339968 kernel: nicvf, ver 1.0 May 9 23:59:45.340115 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 9 23:59:45.340189 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:59:44 UTC (1746835184) May 9 23:59:45.340200 kernel: efifb: probing for efifb May 9 23:59:45.340230 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 9 23:59:45.340238 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 9 23:59:45.340245 kernel: efifb: scrolling: redraw May 9 23:59:45.340253 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 May 9 23:59:45.340263 kernel: Console: switching to colour frame buffer device 128x48 May 9 23:59:45.340270 kernel: fb0: EFI VGA frame buffer device May 9 23:59:45.340277 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... May 9 23:59:45.340285 kernel: hid: raw HID events driver (C) Jiri Kosina May 9 23:59:45.340292 kernel: No ACPI PMU IRQ for CPU0 May 9 23:59:45.340300 kernel: No ACPI PMU IRQ for CPU1 May 9 23:59:45.340307 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available May 9 23:59:45.340314 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 9 23:59:45.340321 kernel: watchdog: Hard watchdog permanently disabled May 9 23:59:45.340330 kernel: NET: Registered PF_INET6 protocol family May 9 23:59:45.340338 kernel: Segment Routing with IPv6 May 9 23:59:45.340345 kernel: In-situ OAM (IOAM) with IPv6 May 9 23:59:45.340352 kernel: NET: Registered PF_PACKET protocol family May 9 23:59:45.340359 kernel: Key type dns_resolver registered May 9 23:59:45.340367 kernel: registered taskstats version 1 May 9 23:59:45.340374 kernel: Loading compiled-in X.509 certificates May 9 23:59:45.340381 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02a1572fa4e3e92c40cffc658d8dbcab2e5537ff' May 9 23:59:45.340389 kernel: Key type .fscrypt registered May 9 23:59:45.340397 kernel: Key type fscrypt-provisioning registered May 9 23:59:45.340405 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 9 23:59:45.340412 kernel: ima: Allocated hash algorithm: sha1 May 9 23:59:45.340419 kernel: ima: No architecture policies found May 9 23:59:45.340427 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 9 23:59:45.340434 kernel: clk: Disabling unused clocks May 9 23:59:45.340441 kernel: Freeing unused kernel memory: 39424K May 9 23:59:45.340448 kernel: Run /init as init process May 9 23:59:45.340455 kernel: with arguments: May 9 23:59:45.340464 kernel: /init May 9 23:59:45.340471 kernel: with environment: May 9 23:59:45.340478 kernel: HOME=/ May 9 23:59:45.340485 kernel: TERM=linux May 9 23:59:45.340492 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 9 23:59:45.340502 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 23:59:45.340511 systemd[1]: Detected virtualization microsoft. May 9 23:59:45.340519 systemd[1]: Detected architecture arm64. May 9 23:59:45.340529 systemd[1]: Running in initrd. May 9 23:59:45.340536 systemd[1]: No hostname configured, using default hostname. May 9 23:59:45.340544 systemd[1]: Hostname set to . May 9 23:59:45.340552 systemd[1]: Initializing machine ID from random generator. May 9 23:59:45.340560 systemd[1]: Queued start job for default target initrd.target. May 9 23:59:45.340567 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:59:45.340575 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:59:45.340584 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
May 9 23:59:45.340593 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 23:59:45.340601 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 9 23:59:45.340609 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 9 23:59:45.340618 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 9 23:59:45.340626 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 9 23:59:45.340634 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:59:45.340644 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 23:59:45.340651 systemd[1]: Reached target paths.target - Path Units. May 9 23:59:45.340659 systemd[1]: Reached target slices.target - Slice Units. May 9 23:59:45.340667 systemd[1]: Reached target swap.target - Swaps. May 9 23:59:45.340675 systemd[1]: Reached target timers.target - Timer Units. May 9 23:59:45.340683 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 9 23:59:45.340690 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 23:59:45.340698 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 9 23:59:45.340706 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 9 23:59:45.340716 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 23:59:45.340724 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 23:59:45.340731 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:59:45.340739 systemd[1]: Reached target sockets.target - Socket Units. May 9 23:59:45.340747 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
May 9 23:59:45.340755 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 23:59:45.340763 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 9 23:59:45.340771 systemd[1]: Starting systemd-fsck-usr.service... May 9 23:59:45.340779 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 23:59:45.340788 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 23:59:45.340815 systemd-journald[217]: Collecting audit messages is disabled. May 9 23:59:45.340834 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:59:45.340843 systemd-journald[217]: Journal started May 9 23:59:45.340863 systemd-journald[217]: Runtime Journal (/run/log/journal/6760a407007a4f41a57ef1706b6e6a87) is 8.0M, max 78.5M, 70.5M free. May 9 23:59:45.339236 systemd-modules-load[218]: Inserted module 'overlay' May 9 23:59:45.375376 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 9 23:59:45.375400 kernel: Bridge firewalling registered May 9 23:59:45.375416 systemd[1]: Started systemd-journald.service - Journal Service. May 9 23:59:45.364824 systemd-modules-load[218]: Inserted module 'br_netfilter' May 9 23:59:45.381087 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 9 23:59:45.394956 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:59:45.409694 systemd[1]: Finished systemd-fsck-usr.service. May 9 23:59:45.420709 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 23:59:45.437323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:59:45.455133 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:59:45.474164 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 9 23:59:45.491168 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 23:59:45.511191 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 23:59:45.522981 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:59:45.535574 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:45.548743 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 23:59:45.564204 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 9 23:59:45.584190 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 23:59:45.605087 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:59:45.620792 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:59:45.636649 dracut-cmdline[248]: dracut-dracut-053 May 9 23:59:45.636649 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4 May 9 23:59:45.681209 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 23:59:45.707038 kernel: SCSI subsystem initialized May 9 23:59:45.707067 kernel: Loading iSCSI transport class v2.0-870. 
May 9 23:59:45.714014 kernel: iscsi: registered transport (tcp)
May 9 23:59:45.722658 systemd-resolved[326]: Positive Trust Anchors:
May 9 23:59:45.740656 kernel: iscsi: registered transport (qla4xxx)
May 9 23:59:45.740680 kernel: QLogic iSCSI HBA Driver
May 9 23:59:45.726861 systemd-resolved[326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:59:45.726896 systemd-resolved[326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:59:45.731586 systemd-resolved[326]: Defaulting to hostname 'linux'.
May 9 23:59:45.732395 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:59:45.749417 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:59:45.778029 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 23:59:45.820256 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 23:59:45.864163 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 23:59:45.864230 kernel: device-mapper: uevent: version 1.0.3
May 9 23:59:45.870563 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 23:59:45.928020 kernel: raid6: neonx8 gen() 15792 MB/s
May 9 23:59:45.940001 kernel: raid6: neonx4 gen() 15637 MB/s
May 9 23:59:45.960016 kernel: raid6: neonx2 gen() 13243 MB/s
May 9 23:59:45.981002 kernel: raid6: neonx1 gen() 10470 MB/s
May 9 23:59:46.001000 kernel: raid6: int64x8 gen() 6959 MB/s
May 9 23:59:46.021006 kernel: raid6: int64x4 gen() 7335 MB/s
May 9 23:59:46.042005 kernel: raid6: int64x2 gen() 6130 MB/s
May 9 23:59:46.065659 kernel: raid6: int64x1 gen() 5062 MB/s
May 9 23:59:46.065680 kernel: raid6: using algorithm neonx8 gen() 15792 MB/s
May 9 23:59:46.089250 kernel: raid6: .... xor() 11932 MB/s, rmw enabled
May 9 23:59:46.089271 kernel: raid6: using neon recovery algorithm
May 9 23:59:46.101862 kernel: xor: measuring software checksum speed
May 9 23:59:46.101884 kernel: 8regs : 19731 MB/sec
May 9 23:59:46.105459 kernel: 32regs : 19627 MB/sec
May 9 23:59:46.108840 kernel: arm64_neon : 27007 MB/sec
May 9 23:59:46.113110 kernel: xor: using function: arm64_neon (27007 MB/sec)
May 9 23:59:46.164017 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 23:59:46.174542 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:59:46.190131 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:59:46.213309 systemd-udevd[437]: Using default interface naming scheme 'v255'.
May 9 23:59:46.218703 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:59:46.236111 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 23:59:46.259062 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
May 9 23:59:46.285904 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:59:46.303461 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:59:46.340451 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:59:46.358147 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 23:59:46.390242 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 23:59:46.407684 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:59:46.424659 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:59:46.437841 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:59:46.454173 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 23:59:46.470113 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:59:46.487175 kernel: hv_vmbus: Vmbus version:5.3
May 9 23:59:46.470325 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:59:46.494944 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:59:46.514537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:59:46.538778 kernel: pps_core: LinuxPPS API ver. 1 registered
May 9 23:59:46.538804 kernel: hv_vmbus: registering driver hv_netvsc
May 9 23:59:46.538814 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 9 23:59:46.538824 kernel: hv_vmbus: registering driver hyperv_keyboard
May 9 23:59:46.514797 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:46.554686 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:46.586386 kernel: hv_vmbus: registering driver hid_hyperv
May 9 23:59:46.586408 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
May 9 23:59:46.586419 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
May 9 23:59:46.594371 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 9 23:59:46.600482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:46.621843 kernel: PTP clock support registered
May 9 23:59:46.621864 kernel: hv_vmbus: registering driver hv_storvsc
May 9 23:59:46.621874 kernel: scsi host0: storvsc_host_t
May 9 23:59:46.622041 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 9 23:59:46.632277 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:59:46.644102 kernel: scsi host1: storvsc_host_t
May 9 23:59:46.646475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:46.666197 kernel: hv_netvsc 000d3afe-e1d6-000d-3afe-e1d6000d3afe eth0: VF slot 1 added
May 9 23:59:46.658604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:59:46.692844 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
May 9 23:59:46.658661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:46.680353 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:46.706165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:46.731081 kernel: hv_utils: Registering HyperV Utility Driver
May 9 23:59:46.731132 kernel: hv_vmbus: registering driver hv_utils
May 9 23:59:46.750551 kernel: hv_utils: Heartbeat IC version 3.0
May 9 23:59:46.750599 kernel: hv_utils: Shutdown IC version 3.2
May 9 23:59:46.750610 kernel: hv_vmbus: registering driver hv_pci
May 9 23:59:46.750620 kernel: hv_utils: TimeSync IC version 4.0
May 9 23:59:46.772731 systemd-resolved[326]: Clock change detected. Flushing caches.
May 9 23:59:46.790214 kernel: hv_pci f40c62b7-2690-4fd4-93b5-47ae0823cd3d: PCI VMBus probing: Using version 0x10004
May 9 23:59:46.790378 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 9 23:59:46.779513 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:46.819651 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 9 23:59:46.819673 kernel: hv_pci f40c62b7-2690-4fd4-93b5-47ae0823cd3d: PCI host bridge to bus 2690:00
May 9 23:59:46.819826 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 9 23:59:46.817897 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:59:46.848053 kernel: pci_bus 2690:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
May 9 23:59:46.848271 kernel: pci_bus 2690:00: No busn resource found for root bus, will use [bus 00-ff]
May 9 23:59:46.865969 kernel: pci 2690:00:02.0: [15b3:1018] type 00 class 0x020000
May 9 23:59:46.866044 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 9 23:59:46.866207 kernel: pci 2690:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 9 23:59:46.875598 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 9 23:59:46.875815 kernel: pci 2690:00:02.0: enabling Extended Tags
May 9 23:59:46.884232 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 9 23:59:46.884758 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 9 23:59:46.904743 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 9 23:59:46.904930 kernel: pci 2690:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2690:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
May 9 23:59:46.905862 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:59:46.942592 kernel: pci_bus 2690:00: busn_res: [bus 00-ff] end is updated to 00
May 9 23:59:46.943029 kernel: pci 2690:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 9 23:59:46.943211 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 9 23:59:46.943223 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 9 23:59:46.985294 kernel: mlx5_core 2690:00:02.0: enabling device (0000 -> 0002)
May 9 23:59:46.991736 kernel: mlx5_core 2690:00:02.0: firmware version: 16.30.1284
May 9 23:59:47.193702 kernel: hv_netvsc 000d3afe-e1d6-000d-3afe-e1d6000d3afe eth0: VF registering: eth1
May 9 23:59:47.193939 kernel: mlx5_core 2690:00:02.0 eth1: joined to eth0
May 9 23:59:47.200979 kernel: mlx5_core 2690:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
May 9 23:59:47.212745 kernel: mlx5_core 2690:00:02.0 enP9872s1: renamed from eth1
May 9 23:59:47.890993 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
May 9 23:59:47.929745 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (496)
May 9 23:59:47.943583 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 9 23:59:48.019573 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
May 9 23:59:48.060762 kernel: BTRFS: device fsid 7278434d-1c51-4098-9ab9-92db46b8a354 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (487)
May 9 23:59:48.075714 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
May 9 23:59:48.083198 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
May 9 23:59:48.112058 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 23:59:48.138748 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 9 23:59:48.147750 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 9 23:59:49.158603 disk-uuid[603]: The operation has completed successfully.
May 9 23:59:49.164082 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 9 23:59:49.221145 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 23:59:49.221244 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 23:59:49.257974 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 23:59:49.271856 sh[689]: Success
May 9 23:59:49.293802 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 9 23:59:49.603610 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 23:59:49.627256 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 23:59:49.633240 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 23:59:49.670743 kernel: BTRFS info (device dm-0): first mount of filesystem 7278434d-1c51-4098-9ab9-92db46b8a354
May 9 23:59:49.670803 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:49.679028 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 23:59:49.684314 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 23:59:49.688482 kernel: BTRFS info (device dm-0): using free space tree
May 9 23:59:50.339516 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 23:59:50.345756 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 23:59:50.370025 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 23:59:50.381955 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 23:59:50.416564 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:50.416593 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:50.416603 kernel: BTRFS info (device sda6): using free space tree
May 9 23:59:50.451761 kernel: BTRFS info (device sda6): auto enabling async discard
May 9 23:59:50.467990 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 23:59:50.473799 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:50.479922 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 23:59:50.496001 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 23:59:50.523436 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:59:50.542894 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:59:50.571089 systemd-networkd[873]: lo: Link UP
May 9 23:59:50.571103 systemd-networkd[873]: lo: Gained carrier
May 9 23:59:50.572690 systemd-networkd[873]: Enumeration completed
May 9 23:59:50.572812 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 23:59:50.577813 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:59:50.577816 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 23:59:50.586124 systemd[1]: Reached target network.target - Network.
May 9 23:59:50.673752 kernel: mlx5_core 2690:00:02.0 enP9872s1: Link up
May 9 23:59:50.716834 kernel: hv_netvsc 000d3afe-e1d6-000d-3afe-e1d6000d3afe eth0: Data path switched to VF: enP9872s1
May 9 23:59:50.717041 systemd-networkd[873]: enP9872s1: Link UP
May 9 23:59:50.717152 systemd-networkd[873]: eth0: Link UP
May 9 23:59:50.717252 systemd-networkd[873]: eth0: Gained carrier
May 9 23:59:50.717261 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:59:50.735030 systemd-networkd[873]: enP9872s1: Gained carrier
May 9 23:59:50.752792 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 9 23:59:51.939219 ignition[842]: Ignition 2.19.0
May 9 23:59:51.939234 ignition[842]: Stage: fetch-offline
May 9 23:59:51.945014 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:59:51.939278 ignition[842]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:51.939287 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:59:51.939381 ignition[842]: parsed url from cmdline: ""
May 9 23:59:51.939383 ignition[842]: no config URL provided
May 9 23:59:51.939388 ignition[842]: reading system config file "/usr/lib/ignition/user.ign"
May 9 23:59:51.974021 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 9 23:59:51.939395 ignition[842]: no config at "/usr/lib/ignition/user.ign"
May 9 23:59:51.939400 ignition[842]: failed to fetch config: resource requires networking
May 9 23:59:51.939771 ignition[842]: Ignition finished successfully
May 9 23:59:51.995228 ignition[884]: Ignition 2.19.0
May 9 23:59:51.995235 ignition[884]: Stage: fetch
May 9 23:59:51.995485 ignition[884]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:51.995498 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:59:51.995615 ignition[884]: parsed url from cmdline: ""
May 9 23:59:51.995618 ignition[884]: no config URL provided
May 9 23:59:51.995623 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
May 9 23:59:51.995630 ignition[884]: no config at "/usr/lib/ignition/user.ign"
May 9 23:59:51.995659 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 9 23:59:52.122613 ignition[884]: GET result: OK
May 9 23:59:52.122701 ignition[884]: config has been read from IMDS userdata
May 9 23:59:52.122774 ignition[884]: parsing config with SHA512: 6eebbf69e011fa43ebc1647f13b82d895dbed2435bad1b1243fed51f9225f14c76c4b08d51b551cf0ad289cebe32f6a8f6933ffea32d21b43e317912cb01396c
May 9 23:59:52.127464 unknown[884]: fetched base config from "system"
May 9 23:59:52.127969 ignition[884]: fetch: fetch complete
May 9 23:59:52.127474 unknown[884]: fetched base config from "system"
May 9 23:59:52.127975 ignition[884]: fetch: fetch passed
May 9 23:59:52.127479 unknown[884]: fetched user config from "azure"
May 9 23:59:52.128024 ignition[884]: Ignition finished successfully
May 9 23:59:52.133692 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 9 23:59:52.180420 ignition[890]: Ignition 2.19.0
May 9 23:59:52.153122 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 23:59:52.180431 ignition[890]: Stage: kargs
May 9 23:59:52.188304 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 23:59:52.180634 ignition[890]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:52.196789 systemd-networkd[873]: enP9872s1: Gained IPv6LL
May 9 23:59:52.180644 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:59:52.214010 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 23:59:52.181677 ignition[890]: kargs: kargs passed
May 9 23:59:52.240679 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 23:59:52.181753 ignition[890]: Ignition finished successfully
May 9 23:59:52.248465 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 23:59:52.237260 ignition[897]: Ignition 2.19.0
May 9 23:59:52.260297 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 23:59:52.237267 ignition[897]: Stage: disks
May 9 23:59:52.273463 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:59:52.237487 ignition[897]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:52.283615 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:59:52.237498 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:59:52.296690 systemd[1]: Reached target basic.target - Basic System.
May 9 23:59:52.238882 ignition[897]: disks: disks passed
May 9 23:59:52.321992 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 23:59:52.238940 ignition[897]: Ignition finished successfully
May 9 23:59:52.433671 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
May 9 23:59:52.441625 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 23:59:52.449086 systemd-networkd[873]: eth0: Gained IPv6LL
May 9 23:59:52.458986 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 23:59:52.518756 kernel: EXT4-fs (sda9): mounted filesystem ffdb9517-5190-4050-8f70-de9d48dc1858 r/w with ordered data mode. Quota mode: none.
May 9 23:59:52.520085 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 23:59:52.526023 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 23:59:52.597867 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:59:52.609648 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 23:59:52.620931 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 9 23:59:52.643845 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (916)
May 9 23:59:52.637669 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 23:59:52.677246 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:52.677272 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:52.677283 kernel: BTRFS info (device sda6): using free space tree
May 9 23:59:52.637707 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:59:52.683215 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 23:59:52.705742 kernel: BTRFS info (device sda6): auto enabling async discard
May 9 23:59:52.711015 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 23:59:52.724135 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:59:53.611056 coreos-metadata[918]: May 09 23:59:53.611 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 9 23:59:53.619602 coreos-metadata[918]: May 09 23:59:53.619 INFO Fetch successful
May 9 23:59:53.619602 coreos-metadata[918]: May 09 23:59:53.619 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 9 23:59:53.637542 coreos-metadata[918]: May 09 23:59:53.637 INFO Fetch successful
May 9 23:59:53.637542 coreos-metadata[918]: May 09 23:59:53.637 INFO wrote hostname ci-4081.3.3-n-4cc30cd86c to /sysroot/etc/hostname
May 9 23:59:53.644839 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 9 23:59:54.002062 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory
May 9 23:59:54.035668 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
May 9 23:59:54.046554 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
May 9 23:59:54.057653 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 23:59:56.002679 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 23:59:56.021005 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 23:59:56.028532 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 23:59:56.052493 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 23:59:56.060761 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:56.078762 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 23:59:56.093461 ignition[1038]: INFO : Ignition 2.19.0
May 9 23:59:56.093461 ignition[1038]: INFO : Stage: mount
May 9 23:59:56.102467 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:59:56.102467 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:59:56.102467 ignition[1038]: INFO : mount: mount passed
May 9 23:59:56.102467 ignition[1038]: INFO : Ignition finished successfully
May 9 23:59:56.103533 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 23:59:56.128891 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 23:59:56.142935 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:59:56.172747 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1048)
May 9 23:59:56.186610 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:56.186659 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:56.190974 kernel: BTRFS info (device sda6): using free space tree
May 9 23:59:56.197753 kernel: BTRFS info (device sda6): auto enabling async discard
May 9 23:59:56.199168 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:59:56.222021 ignition[1066]: INFO : Ignition 2.19.0
May 9 23:59:56.222021 ignition[1066]: INFO : Stage: files
May 9 23:59:56.229938 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:59:56.229938 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:59:56.229938 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping
May 9 23:59:56.247741 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 23:59:56.247741 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 23:59:56.542208 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 23:59:56.550439 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 23:59:56.550439 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 23:59:56.543149 unknown[1066]: wrote ssh authorized keys file for user: core
May 9 23:59:56.571889 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 9 23:59:56.571889 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 9 23:59:56.571889 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 23:59:56.571889 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 9 23:59:56.613521 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:56.659743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 9 23:59:56.984111 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 9 23:59:57.184126 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:57.184126 ignition[1066]: INFO : files: op(c): [started] processing unit "containerd.service"
May 9 23:59:57.241188 ignition[1066]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 9 23:59:57.255088 ignition[1066]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 9 23:59:57.255088 ignition[1066]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:59:57.255088 ignition[1066]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:59:57.255088 ignition[1066]: INFO : files: files passed
May 9 23:59:57.255088 ignition[1066]: INFO : Ignition finished successfully
May 9 23:59:57.256148 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 23:59:57.302027 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 23:59:57.322916 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 23:59:57.409064 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:59:57.409064 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:59:57.334494 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 23:59:57.444117 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:59:57.334660 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 23:59:57.409813 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:59:57.424444 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 23:59:57.459942 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 23:59:57.494059 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 23:59:57.494191 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 23:59:57.506850 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 23:59:57.520210 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 23:59:57.531902 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 23:59:57.547975 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 23:59:57.568439 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:59:57.587001 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 23:59:57.607529 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 23:59:57.607657 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 23:59:57.620145 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 23:59:57.632869 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:59:57.645651 systemd[1]: Stopped target timers.target - Timer Units.
May 9 23:59:57.656779 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 23:59:57.656860 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:59:57.673044 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 23:59:57.684873 systemd[1]: Stopped target basic.target - Basic System.
May 9 23:59:57.695505 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 23:59:57.706103 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:59:57.718072 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 23:59:57.730871 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 23:59:57.742268 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:59:57.754432 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 23:59:57.766533 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 23:59:57.777925 systemd[1]: Stopped target swap.target - Swaps.
May 9 23:59:57.787805 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 23:59:57.787890 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:59:57.803513 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 23:59:57.809928 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:59:57.822321 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 23:59:57.822381 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:59:57.835003 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 23:59:57.835072 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 23:59:57.852252 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 23:59:57.852322 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:59:57.866277 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 23:59:57.866333 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 23:59:57.876988 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 9 23:59:57.877027 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 9 23:59:57.941688 ignition[1119]: INFO : Ignition 2.19.0
May 9 23:59:57.941688 ignition[1119]: INFO : Stage: umount
May 9 23:59:57.941688 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:59:57.941688 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 9 23:59:57.941688 ignition[1119]: INFO : umount: umount passed
May 9 23:59:57.941688 ignition[1119]: INFO : Ignition finished successfully
May 9 23:59:57.909918 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 23:59:57.926862 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 23:59:57.926958 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:59:57.943930 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 23:59:57.949487 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 23:59:57.949556 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:59:57.956305 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 23:59:57.956356 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:59:57.978983 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 23:59:57.979461 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 23:59:57.979585 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 23:59:57.985749 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 23:59:57.985810 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 23:59:58.005418 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 23:59:58.005497 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 23:59:58.012088 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 9 23:59:58.012142 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 9 23:59:58.022374 systemd[1]: Stopped target network.target - Network.
May 9 23:59:58.033219 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 23:59:58.033283 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:59:58.040652 systemd[1]: Stopped target paths.target - Path Units.
May 9 23:59:58.050530 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 23:59:58.056245 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:59:58.063376 systemd[1]: Stopped target slices.target - Slice Units.
May 9 23:59:58.074844 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 23:59:58.084999 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 23:59:58.085048 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:59:58.096304 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 23:59:58.096345 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:59:58.107866 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 23:59:58.107920 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 23:59:58.118447 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 23:59:58.118490 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 23:59:58.130156 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 23:59:58.136347 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 23:59:58.392612 kernel: hv_netvsc 000d3afe-e1d6-000d-3afe-e1d6000d3afe eth0: Data path switched from VF: enP9872s1
May 9 23:59:58.158801 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 23:59:58.158964 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 23:59:58.163280 systemd-networkd[873]: eth0: DHCPv6 lease lost
May 9 23:59:58.172833 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 23:59:58.172956 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 23:59:58.181937 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 23:59:58.182160 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:59:58.210912 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 23:59:58.220575 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 23:59:58.220645 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:59:58.232461 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 23:59:58.232511 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 23:59:58.243944 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 23:59:58.243993 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 23:59:58.255946 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 23:59:58.255989 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:59:58.268308 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:59:58.314438 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 23:59:58.314606 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:59:58.327258 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 23:59:58.327306 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 23:59:58.338302 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 23:59:58.338349 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:59:58.349847 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 23:59:58.349899 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:59:58.366486 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 23:59:58.366530 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 23:59:58.386797 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:59:58.386898 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:59:58.419950 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 23:59:58.436273 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 23:59:58.436347 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:59:58.448243 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:59:58.448290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:58.461045 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 23:59:58.461161 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 23:59:58.472222 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 23:59:58.472313 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 23:59:58.485293 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 23:59:58.485404 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 23:59:58.499569 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 23:59:58.499670 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 23:59:58.510764 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 23:59:58.540983 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 23:59:58.671838 systemd[1]: Switching root.
May 9 23:59:58.731164 systemd-journald[217]: Journal stopped
May 10 00:00:03.615549 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
May 10 00:00:03.615573 kernel: SELinux: policy capability network_peer_controls=1
May 10 00:00:03.615584 kernel: SELinux: policy capability open_perms=1
May 10 00:00:03.615594 kernel: SELinux: policy capability extended_socket_class=1
May 10 00:00:03.615602 kernel: SELinux: policy capability always_check_network=0
May 10 00:00:03.615610 kernel: SELinux: policy capability cgroup_seclabel=1
May 10 00:00:03.615619 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 10 00:00:03.615629 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 10 00:00:03.615638 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 10 00:00:03.615646 kernel: audit: type=1403 audit(1746835200.524:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 10 00:00:03.615657 systemd[1]: Successfully loaded SELinux policy in 78.397ms.
May 10 00:00:03.615668 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.253ms.
May 10 00:00:03.615678 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 10 00:00:03.615688 systemd[1]: Detected virtualization microsoft.
May 10 00:00:03.615698 systemd[1]: Detected architecture arm64.
May 10 00:00:03.615709 systemd[1]: Detected first boot.
May 10 00:00:03.615730 systemd[1]: Hostname set to .
May 10 00:00:03.615741 systemd[1]: Initializing machine ID from random generator.
May 10 00:00:03.615751 zram_generator::config[1178]: No configuration found.
May 10 00:00:03.615761 systemd[1]: Populated /etc with preset unit settings.
May 10 00:00:03.615770 systemd[1]: Queued start job for default target multi-user.target.
May 10 00:00:03.615781 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 10 00:00:03.615792 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 10 00:00:03.615802 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 10 00:00:03.615811 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 10 00:00:03.615822 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 10 00:00:03.615832 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 10 00:00:03.615841 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 10 00:00:03.615853 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 10 00:00:03.615863 systemd[1]: Created slice user.slice - User and Session Slice.
May 10 00:00:03.615872 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 10 00:00:03.615882 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 10 00:00:03.615892 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 10 00:00:03.615901 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 10 00:00:03.615911 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 10 00:00:03.615921 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 10 00:00:03.615930 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 10 00:00:03.615941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 10 00:00:03.615951 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 10 00:00:03.615961 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 10 00:00:03.615973 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 10 00:00:03.615983 systemd[1]: Reached target slices.target - Slice Units.
May 10 00:00:03.615993 systemd[1]: Reached target swap.target - Swaps.
May 10 00:00:03.616003 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 10 00:00:03.616014 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 10 00:00:03.616024 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 10 00:00:03.616033 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 10 00:00:03.616043 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 10 00:00:03.616053 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 10 00:00:03.616063 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 10 00:00:03.616073 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 10 00:00:03.616085 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 10 00:00:03.616095 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 10 00:00:03.616105 systemd[1]: Mounting media.mount - External Media Directory...
May 10 00:00:03.616115 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 10 00:00:03.616125 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 10 00:00:03.616136 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 10 00:00:03.616147 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 10 00:00:03.616157 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 10 00:00:03.616167 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 10 00:00:03.616177 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 10 00:00:03.616187 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 10 00:00:03.616197 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 10 00:00:03.616207 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 10 00:00:03.616217 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 10 00:00:03.616227 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 10 00:00:03.616239 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 10 00:00:03.616250 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 10 00:00:03.616261 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 10 00:00:03.616272 systemd[1]: Starting systemd-journald.service - Journal Service...
May 10 00:00:03.616281 kernel: fuse: init (API version 7.39)
May 10 00:00:03.616290 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 10 00:00:03.616301 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 10 00:00:03.616325 systemd-journald[1287]: Collecting audit messages is disabled.
May 10 00:00:03.616347 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 10 00:00:03.616358 systemd-journald[1287]: Journal started
May 10 00:00:03.616380 systemd-journald[1287]: Runtime Journal (/run/log/journal/f04121be281d43139a0a7bedac7508c0) is 8.0M, max 78.5M, 70.5M free.
May 10 00:00:03.632731 kernel: loop: module loaded
May 10 00:00:03.632783 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 10 00:00:03.660741 systemd[1]: Started systemd-journald.service - Journal Service.
May 10 00:00:03.661698 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 10 00:00:03.667825 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 10 00:00:03.674146 systemd[1]: Mounted media.mount - External Media Directory.
May 10 00:00:03.680424 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 10 00:00:03.687554 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 10 00:00:03.693990 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 10 00:00:03.699889 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 10 00:00:03.706400 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 10 00:00:03.713671 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 10 00:00:03.713916 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 10 00:00:03.723585 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:00:03.723760 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 10 00:00:03.736078 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:00:03.736248 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 10 00:00:03.746316 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 10 00:00:03.746465 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 10 00:00:03.756563 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:00:03.756735 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 10 00:00:03.767122 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 10 00:00:03.774219 kernel: ACPI: bus type drm_connector registered
May 10 00:00:03.775113 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 10 00:00:03.775358 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 10 00:00:03.781709 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 10 00:00:03.789304 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 10 00:00:03.796474 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 10 00:00:03.814077 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 10 00:00:03.823797 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 10 00:00:03.834848 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 10 00:00:03.841374 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 10 00:00:03.879851 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 10 00:00:03.887216 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 10 00:00:03.893803 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 00:00:03.895943 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 10 00:00:03.902117 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 10 00:00:03.903257 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 10 00:00:03.913996 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 10 00:00:03.923902 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 10 00:00:03.932047 systemd-journald[1287]: Time spent on flushing to /var/log/journal/f04121be281d43139a0a7bedac7508c0 is 19.241ms for 886 entries.
May 10 00:00:03.932047 systemd-journald[1287]: System Journal (/var/log/journal/f04121be281d43139a0a7bedac7508c0) is 8.0M, max 2.6G, 2.6G free.
May 10 00:00:03.994172 systemd-journald[1287]: Received client request to flush runtime journal.
May 10 00:00:03.940928 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 10 00:00:03.948992 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 10 00:00:03.956421 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 10 00:00:03.967308 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 10 00:00:03.974787 udevadm[1340]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 10 00:00:03.996711 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 10 00:00:04.075775 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 10 00:00:04.108499 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
May 10 00:00:04.108518 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
May 10 00:00:04.114121 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 10 00:00:04.125914 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 10 00:00:04.159200 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 10 00:00:04.173917 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 10 00:00:04.188315 systemd-tmpfiles[1358]: ACLs are not supported, ignoring.
May 10 00:00:04.188673 systemd-tmpfiles[1358]: ACLs are not supported, ignoring.
May 10 00:00:04.192840 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 10 00:00:05.674619 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 10 00:00:05.686916 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 10 00:00:05.725418 systemd-udevd[1364]: Using default interface naming scheme 'v255'.
May 10 00:00:05.781076 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 10 00:00:05.808172 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 10 00:00:05.879580 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
May 10 00:00:05.949857 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 10 00:00:05.992756 kernel: mousedev: PS/2 mouse device common for all mice
May 10 00:00:06.004708 kernel: hv_vmbus: registering driver hv_balloon
May 10 00:00:06.004811 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 10 00:00:06.013745 kernel: hv_balloon: Memory hot add disabled on ARM64
May 10 00:00:06.015012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 10 00:00:06.025264 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 10 00:00:06.045750 kernel: hv_vmbus: registering driver hyperv_fb
May 10 00:00:06.045709 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 10 00:00:06.046003 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 10 00:00:06.066838 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 10 00:00:06.066941 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 10 00:00:06.072595 kernel: Console: switching to colour dummy device 80x25
May 10 00:00:06.072639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 10 00:00:06.080850 kernel: Console: switching to colour frame buffer device 128x48
May 10 00:00:06.088593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 10 00:00:06.088883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 10 00:00:06.104862 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 10 00:00:06.144755 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1379)
May 10 00:00:06.210846 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 10 00:00:06.271148 systemd-networkd[1380]: lo: Link UP
May 10 00:00:06.271157 systemd-networkd[1380]: lo: Gained carrier
May 10 00:00:06.273006 systemd-networkd[1380]: Enumeration completed
May 10 00:00:06.273151 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 10 00:00:06.273337 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 00:00:06.273341 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 00:00:06.287482 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 10 00:00:06.336738 kernel: mlx5_core 2690:00:02.0 enP9872s1: Link up
May 10 00:00:06.362752 kernel: hv_netvsc 000d3afe-e1d6-000d-3afe-e1d6000d3afe eth0: Data path switched to VF: enP9872s1
May 10 00:00:06.363627 systemd-networkd[1380]: enP9872s1: Link UP
May 10 00:00:06.363748 systemd-networkd[1380]: eth0: Link UP
May 10 00:00:06.363752 systemd-networkd[1380]: eth0: Gained carrier
May 10 00:00:06.363765 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 00:00:06.373067 systemd-networkd[1380]: enP9872s1: Gained carrier
May 10 00:00:06.375735 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 10 00:00:06.386794 systemd-networkd[1380]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 10 00:00:06.387897 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 10 00:00:06.504808 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 10 00:00:06.532122 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 10 00:00:06.540210 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 10 00:00:06.554948 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 10 00:00:06.565660 lvm[1461]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 10 00:00:06.589140 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 10 00:00:06.597197 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 10 00:00:06.605236 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 10 00:00:06.605259 systemd[1]: Reached target local-fs.target - Local File Systems.
May 10 00:00:06.611155 systemd[1]: Reached target machines.target - Containers.
May 10 00:00:06.617389 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 10 00:00:06.629907 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 10 00:00:06.641931 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 10 00:00:06.647960 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 10 00:00:06.650913 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 10 00:00:06.660607 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 10 00:00:06.669895 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 10 00:00:06.680548 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 10 00:00:06.690978 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 10 00:00:06.713259 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 10 00:00:06.731777 kernel: loop0: detected capacity change from 0 to 31320
May 10 00:00:06.750655 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 10 00:00:06.752215 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 10 00:00:07.023879 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 10 00:00:07.059750 kernel: loop1: detected capacity change from 0 to 114432
May 10 00:00:07.195740 kernel: loop2: detected capacity change from 0 to 194096
May 10 00:00:07.233753 kernel: loop3: detected capacity change from 0 to 114328
May 10 00:00:07.406749 kernel: loop4: detected capacity change from 0 to 31320
May 10 00:00:07.416742 kernel: loop5: detected capacity change from 0 to 114432
May 10 00:00:07.427780 kernel: loop6: detected capacity change from 0 to 194096
May 10 00:00:07.441060 kernel: loop7: detected capacity change from 0 to 114328
May 10 00:00:07.445113 (sd-merge)[1487]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
May 10 00:00:07.445957 (sd-merge)[1487]: Merged extensions into '/usr'.
May 10 00:00:07.450236 systemd[1]: Reloading requested from client PID 1471 ('systemd-sysext') (unit systemd-sysext.service)...
May 10 00:00:07.450251 systemd[1]: Reloading...
May 10 00:00:07.514096 zram_generator::config[1521]: No configuration found.
May 10 00:00:07.634263 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 00:00:07.703498 systemd[1]: Reloading finished in 252 ms.
May 10 00:00:07.720741 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 10 00:00:07.734860 systemd[1]: Starting ensure-sysext.service...
May 10 00:00:07.740497 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 10 00:00:07.752153 systemd[1]: Reloading requested from client PID 1576 ('systemctl') (unit ensure-sysext.service)...
May 10 00:00:07.752261 systemd[1]: Reloading...
May 10 00:00:07.764757 systemd-tmpfiles[1577]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 10 00:00:07.765086 systemd-tmpfiles[1577]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 10 00:00:07.765855 systemd-tmpfiles[1577]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 10 00:00:07.766131 systemd-tmpfiles[1577]: ACLs are not supported, ignoring.
May 10 00:00:07.766181 systemd-tmpfiles[1577]: ACLs are not supported, ignoring.
May 10 00:00:07.774131 systemd-tmpfiles[1577]: Detected autofs mount point /boot during canonicalization of boot.
May 10 00:00:07.774147 systemd-tmpfiles[1577]: Skipping /boot
May 10 00:00:07.783438 systemd-tmpfiles[1577]: Detected autofs mount point /boot during canonicalization of boot.
May 10 00:00:07.783453 systemd-tmpfiles[1577]: Skipping /boot
May 10 00:00:07.827832 zram_generator::config[1608]: No configuration found.
May 10 00:00:07.947930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 00:00:08.022079 systemd[1]: Reloading finished in 269 ms.
May 10 00:00:08.035166 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 10 00:00:08.059985 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 10 00:00:08.071656 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 10 00:00:08.085928 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 10 00:00:08.098221 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 10 00:00:08.116865 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 10 00:00:08.132153 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 10 00:00:08.138888 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 10 00:00:08.154513 augenrules[1694]: No rules
May 10 00:00:08.162012 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 10 00:00:08.176233 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 10 00:00:08.186005 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 10 00:00:08.187800 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 10 00:00:08.195523 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 10 00:00:08.205095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:00:08.205361 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 10 00:00:08.213515 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:00:08.213677 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 10 00:00:08.219498 systemd-resolved[1683]: Positive Trust Anchors:
May 10 00:00:08.219515 systemd-resolved[1683]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 10 00:00:08.219546 systemd-resolved[1683]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 10 00:00:08.221612 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:00:08.223921 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 10 00:00:08.225263 systemd-resolved[1683]: Using system hostname 'ci-4081.3.3-n-4cc30cd86c'.
May 10 00:00:08.231685 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 10 00:00:08.241484 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 10 00:00:08.256037 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 10 00:00:08.268914 systemd[1]: Reached target network.target - Network.
May 10 00:00:08.275008 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 10 00:00:08.283410 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 10 00:00:08.289938 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 10 00:00:08.297264 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 10 00:00:08.307000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 10 00:00:08.320817 systemd-networkd[1380]: eth0: Gained IPv6LL
May 10 00:00:08.321320 systemd-networkd[1380]: enP9872s1: Gained IPv6LL
May 10 00:00:08.322968 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 10 00:00:08.328837 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 10 00:00:08.329007 systemd[1]: Reached target time-set.target - System Time Set.
May 10 00:00:08.334908 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 10 00:00:08.336157 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 10 00:00:08.344085 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:00:08.344244 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 10 00:00:08.351402 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 10 00:00:08.351546 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 10 00:00:08.358388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:00:08.358533 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 10 00:00:08.365820 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:00:08.366017 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 10 00:00:08.375267 systemd[1]: Finished ensure-sysext.service.
May 10 00:00:08.383523 systemd[1]: Reached target network-online.target - Network is Online.
May 10 00:00:08.390594 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 00:00:08.390669 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 10 00:00:13.654262 ldconfig[1468]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 10 00:00:13.665434 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 10 00:00:13.676937 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 10 00:00:13.696220 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 10 00:00:13.703379 systemd[1]: Reached target sysinit.target - System Initialization.
May 10 00:00:13.710018 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 10 00:00:13.717618 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 10 00:00:13.725237 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 10 00:00:13.731684 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 10 00:00:13.738976 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 10 00:00:13.746275 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 10 00:00:13.746310 systemd[1]: Reached target paths.target - Path Units.
May 10 00:00:13.751240 systemd[1]: Reached target timers.target - Timer Units.
May 10 00:00:13.758794 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 10 00:00:13.766567 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 10 00:00:13.772834 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 10 00:00:13.779558 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 10 00:00:13.785706 systemd[1]: Reached target sockets.target - Socket Units.
May 10 00:00:13.791421 systemd[1]: Reached target basic.target - Basic System.
May 10 00:00:13.796838 systemd[1]: System is tainted: cgroupsv1
May 10 00:00:13.796879 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 10 00:00:13.796898 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 10 00:00:13.799190 systemd[1]: Starting chronyd.service - NTP client/server...
May 10 00:00:13.808866 systemd[1]: Starting containerd.service - containerd container runtime...
May 10 00:00:13.820880 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 10 00:00:13.829890 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 10 00:00:13.845952 (chronyd)[1739]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
May 10 00:00:13.849035 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 10 00:00:13.857922 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 10 00:00:13.858647 jq[1746]: false
May 10 00:00:13.864858 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 10 00:00:13.864895 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
May 10 00:00:13.866875 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
May 10 00:00:13.874130 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
May 10 00:00:13.876852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:00:13.882487 KVP[1749]: KVP starting; pid is:1749
May 10 00:00:13.883871 chronyd[1752]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
May 10 00:00:13.887938 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 10 00:00:13.896880 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 10 00:00:13.897341 KVP[1749]: KVP LIC Version: 3.1
May 10 00:00:13.897766 kernel: hv_utils: KVP IC version 4.0
May 10 00:00:13.907461 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 10 00:00:13.917277 chronyd[1752]: Timezone right/UTC failed leap second check, ignoring
May 10 00:00:13.918485 chronyd[1752]: Loaded seccomp filter (level 2)
May 10 00:00:13.928939 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 10 00:00:13.939029 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 10 00:00:13.947914 systemd[1]: Starting systemd-logind.service - User Login Management...
May 10 00:00:13.958383 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 10 00:00:13.963182 systemd[1]: Starting update-engine.service - Update Engine...
May 10 00:00:13.979785 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 10 00:00:13.993068 systemd[1]: Started chronyd.service - NTP client/server.
May 10 00:00:13.998345 jq[1776]: true
May 10 00:00:14.004744 extend-filesystems[1747]: Found loop4
May 10 00:00:14.004744 extend-filesystems[1747]: Found loop5
May 10 00:00:14.004744 extend-filesystems[1747]: Found loop6
May 10 00:00:14.004744 extend-filesystems[1747]: Found loop7
May 10 00:00:14.004744 extend-filesystems[1747]: Found sda
May 10 00:00:14.004744 extend-filesystems[1747]: Found sda1
May 10 00:00:14.004744 extend-filesystems[1747]: Found sda2
May 10 00:00:14.004744 extend-filesystems[1747]: Found sda3
May 10 00:00:14.004744 extend-filesystems[1747]: Found usr
May 10 00:00:14.004744 extend-filesystems[1747]: Found sda4
May 10 00:00:14.004744 extend-filesystems[1747]: Found sda6
May 10 00:00:14.004744 extend-filesystems[1747]: Found sda7
May 10 00:00:14.004744 extend-filesystems[1747]: Found sda9
May 10 00:00:14.004744 extend-filesystems[1747]: Checking size of /dev/sda9
May 10 00:00:14.397427 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1805)
May 10 00:00:14.397503 coreos-metadata[1741]: May 10 00:00:14.281 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 10 00:00:14.397503 coreos-metadata[1741]: May 10 00:00:14.293 INFO Fetch successful
May 10 00:00:14.397503 coreos-metadata[1741]: May 10 00:00:14.293 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
May 10 00:00:14.397503 coreos-metadata[1741]: May 10 00:00:14.297 INFO Fetch successful
May 10 00:00:14.397503 coreos-metadata[1741]: May 10 00:00:14.298 INFO Fetching http://168.63.129.16/machine/2157fa72-8b68-4e2b-b5b1-4bdb61f9602c/b470f33d%2D8504%2D45b9%2Da7fd%2Dc8cfc8722ab8.%5Fci%2D4081.3.3%2Dn%2D4cc30cd86c?comp=config&type=sharedConfig&incarnation=1: Attempt #1
May 10 00:00:14.397503 coreos-metadata[1741]: May 10 00:00:14.300 INFO Fetch successful
May 10 00:00:14.397503 coreos-metadata[1741]: May 10 00:00:14.300 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
May 10 00:00:14.397503 coreos-metadata[1741]: May 10 00:00:14.315 INFO Fetch successful
May 10 00:00:14.137014 dbus-daemon[1743]: [system] SELinux support is enabled
May 10 00:00:14.010648 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 10 00:00:14.401178 extend-filesystems[1747]: Old size kept for /dev/sda9
May 10 00:00:14.401178 extend-filesystems[1747]: Found sr0
May 10 00:00:14.412157 update_engine[1773]: I20250510 00:00:14.107080 1773 main.cc:92] Flatcar Update Engine starting
May 10 00:00:14.412157 update_engine[1773]: I20250510 00:00:14.159077 1773 update_check_scheduler.cc:74] Next update check in 11m30s
May 10 00:00:14.169171 dbus-daemon[1743]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 10 00:00:14.010922 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 10 00:00:14.015412 systemd[1]: motdgen.service: Deactivated successfully.
May 10 00:00:14.015619 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 10 00:00:14.413828 tar[1786]: linux-arm64/helm
May 10 00:00:14.028203 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 10 00:00:14.414347 jq[1790]: true
May 10 00:00:14.049235 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 10 00:00:14.049463 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 10 00:00:14.081216 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 10 00:00:14.416454 bash[1837]: Updated "/home/core/.ssh/authorized_keys"
May 10 00:00:14.081439 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 10 00:00:14.097909 systemd-logind[1765]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 10 00:00:14.103814 systemd-logind[1765]: New seat seat0.
May 10 00:00:14.104818 systemd[1]: Started systemd-logind.service - User Login Management.
May 10 00:00:14.123308 (ntainerd)[1794]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 10 00:00:14.139253 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 10 00:00:14.161570 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 10 00:00:14.161600 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 10 00:00:14.180897 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 10 00:00:14.180918 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 10 00:00:14.195325 systemd[1]: Started update-engine.service - Update Engine.
May 10 00:00:14.206102 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 10 00:00:14.219031 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 10 00:00:14.328254 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 10 00:00:14.399395 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 10 00:00:14.470016 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 10 00:00:14.482009 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 10 00:00:14.553352 locksmithd[1821]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 10 00:00:14.946204 containerd[1794]: time="2025-05-10T00:00:14.946091800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 10 00:00:14.987894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:00:14.988777 (kubelet)[1890]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:00:15.016739 containerd[1794]: time="2025-05-10T00:00:15.015792240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 10 00:00:15.021295 containerd[1794]: time="2025-05-10T00:00:15.020361720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 10 00:00:15.021295 containerd[1794]: time="2025-05-10T00:00:15.020397480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 10 00:00:15.021295 containerd[1794]: time="2025-05-10T00:00:15.020414480Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 10 00:00:15.021295 containerd[1794]: time="2025-05-10T00:00:15.020602360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 10 00:00:15.021295 containerd[1794]: time="2025-05-10T00:00:15.020622320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 10 00:00:15.021295 containerd[1794]: time="2025-05-10T00:00:15.020687560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:00:15.021295 containerd[1794]: time="2025-05-10T00:00:15.020700560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 10 00:00:15.021295 containerd[1794]: time="2025-05-10T00:00:15.020963680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:00:15.021295 containerd[1794]: time="2025-05-10T00:00:15.020981240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 10 00:00:15.021295 containerd[1794]: time="2025-05-10T00:00:15.020995440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:00:15.021295 containerd[1794]: time="2025-05-10T00:00:15.021006880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 10 00:00:15.021562 containerd[1794]: time="2025-05-10T00:00:15.021075040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 10 00:00:15.021562 containerd[1794]: time="2025-05-10T00:00:15.021259800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 10 00:00:15.022467 containerd[1794]: time="2025-05-10T00:00:15.022138120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:00:15.022467 containerd[1794]: time="2025-05-10T00:00:15.022160560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 10 00:00:15.022467 containerd[1794]: time="2025-05-10T00:00:15.022260560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 10 00:00:15.022467 containerd[1794]: time="2025-05-10T00:00:15.022300720Z" level=info msg="metadata content store policy set" policy=shared
May 10 00:00:15.039794 containerd[1794]: time="2025-05-10T00:00:15.039291720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 10 00:00:15.039794 containerd[1794]: time="2025-05-10T00:00:15.039340840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 10 00:00:15.039794 containerd[1794]: time="2025-05-10T00:00:15.039356760Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 10 00:00:15.039794 containerd[1794]: time="2025-05-10T00:00:15.039382960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 10 00:00:15.039794 containerd[1794]: time="2025-05-10T00:00:15.039406200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 10 00:00:15.039794 containerd[1794]: time="2025-05-10T00:00:15.039557640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 10 00:00:15.042557 containerd[1794]: time="2025-05-10T00:00:15.040185200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 10 00:00:15.042557 containerd[1794]: time="2025-05-10T00:00:15.041935880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 10 00:00:15.042557 containerd[1794]: time="2025-05-10T00:00:15.041966560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 10 00:00:15.042557 containerd[1794]: time="2025-05-10T00:00:15.041983920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 10 00:00:15.042557 containerd[1794]: time="2025-05-10T00:00:15.041997840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 10 00:00:15.042557 containerd[1794]: time="2025-05-10T00:00:15.042012600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 10 00:00:15.042557 containerd[1794]: time="2025-05-10T00:00:15.042025360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 10 00:00:15.042557 containerd[1794]: time="2025-05-10T00:00:15.042049240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 10 00:00:15.042557 containerd[1794]: time="2025-05-10T00:00:15.042065600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 10 00:00:15.042557 containerd[1794]: time="2025-05-10T00:00:15.042077880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 10 00:00:15.042557 containerd[1794]: time="2025-05-10T00:00:15.042095880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 10 00:00:15.042557 containerd[1794]: time="2025-05-10T00:00:15.042110240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 10 00:00:15.042557 containerd[1794]: time="2025-05-10T00:00:15.042141400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042557 containerd[1794]: time="2025-05-10T00:00:15.042160800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042174320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042187560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042209120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042223640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042235280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042255360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042268800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042300920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042313440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042326000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042338680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042373440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042397120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042408720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 10 00:00:15.042904 containerd[1794]: time="2025-05-10T00:00:15.042419560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 10 00:00:15.043869 containerd[1794]: time="2025-05-10T00:00:15.043255160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 10 00:00:15.043869 containerd[1794]: time="2025-05-10T00:00:15.043291040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 10 00:00:15.043869 containerd[1794]: time="2025-05-10T00:00:15.043360280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 10 00:00:15.043869 containerd[1794]: time="2025-05-10T00:00:15.043376840Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 10 00:00:15.043869 containerd[1794]: time="2025-05-10T00:00:15.043388400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 10 00:00:15.043869 containerd[1794]: time="2025-05-10T00:00:15.043486080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 10 00:00:15.043869 containerd[1794]: time="2025-05-10T00:00:15.043502760Z" level=info msg="NRI interface is disabled by configuration."
May 10 00:00:15.043869 containerd[1794]: time="2025-05-10T00:00:15.043514000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 10 00:00:15.047056 containerd[1794]: time="2025-05-10T00:00:15.045875800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 10 00:00:15.047056 containerd[1794]: time="2025-05-10T00:00:15.046253960Z" level=info msg="Connect containerd service"
May 10 00:00:15.047056 containerd[1794]: time="2025-05-10T00:00:15.046417320Z" level=info msg="using legacy CRI server"
May 10 00:00:15.047056 containerd[1794]: time="2025-05-10T00:00:15.046430800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 10 00:00:15.047056 containerd[1794]: time="2025-05-10T00:00:15.046711120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 10 00:00:15.051417 containerd[1794]: time="2025-05-10T00:00:15.050992880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 00:00:15.051417 containerd[1794]: time="2025-05-10T00:00:15.051375040Z" level=info msg="Start subscribing containerd event"
May 10 00:00:15.051561 containerd[1794]: time="2025-05-10T00:00:15.051546240Z" level=info msg="Start recovering state"
May 10 00:00:15.054747 containerd[1794]: time="2025-05-10T00:00:15.053738440Z" level=info msg=serving...
address=/run/containerd/containerd.sock.ttrpc May 10 00:00:15.054747 containerd[1794]: time="2025-05-10T00:00:15.053860720Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 00:00:15.060981 containerd[1794]: time="2025-05-10T00:00:15.054884200Z" level=info msg="Start event monitor" May 10 00:00:15.060981 containerd[1794]: time="2025-05-10T00:00:15.054907920Z" level=info msg="Start snapshots syncer" May 10 00:00:15.060981 containerd[1794]: time="2025-05-10T00:00:15.054917400Z" level=info msg="Start cni network conf syncer for default" May 10 00:00:15.060981 containerd[1794]: time="2025-05-10T00:00:15.054924720Z" level=info msg="Start streaming server" May 10 00:00:15.055105 systemd[1]: Started containerd.service - containerd container runtime. May 10 00:00:15.061588 containerd[1794]: time="2025-05-10T00:00:15.061213200Z" level=info msg="containerd successfully booted in 0.118806s" May 10 00:00:15.120571 tar[1786]: linux-arm64/LICENSE May 10 00:00:15.120698 tar[1786]: linux-arm64/README.md May 10 00:00:15.136254 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 10 00:00:15.245546 sshd_keygen[1774]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 00:00:15.267549 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 10 00:00:15.282006 systemd[1]: Starting issuegen.service - Generate /run/issue... May 10 00:00:15.289808 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 10 00:00:15.299932 systemd[1]: issuegen.service: Deactivated successfully. May 10 00:00:15.300182 systemd[1]: Finished issuegen.service - Generate /run/issue. May 10 00:00:15.315002 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 10 00:00:15.329443 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 10 00:00:15.352472 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
May 10 00:00:15.366205 systemd[1]: Started getty@tty1.service - Getty on tty1. May 10 00:00:15.378106 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 10 00:00:15.387703 systemd[1]: Reached target getty.target - Login Prompts. May 10 00:00:15.396990 systemd[1]: Reached target multi-user.target - Multi-User System. May 10 00:00:15.406822 systemd[1]: Startup finished in 16.350s (kernel) + 14.959s (userspace) = 31.310s. May 10 00:00:15.513091 login[1931]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:15.515954 login[1932]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:15.526066 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 10 00:00:15.533528 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 10 00:00:15.537775 systemd-logind[1765]: New session 1 of user core. May 10 00:00:15.540858 systemd-logind[1765]: New session 2 of user core. May 10 00:00:15.552125 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 10 00:00:15.559068 systemd[1]: Starting user@500.service - User Manager for UID 500... May 10 00:00:15.577005 (systemd)[1942]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 00:00:15.599386 kubelet[1890]: E0510 00:00:15.599325 1890 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:00:15.602883 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:00:15.603107 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 10 00:00:15.751222 systemd[1942]: Queued start job for default target default.target. May 10 00:00:15.751886 systemd[1942]: Created slice app.slice - User Application Slice. May 10 00:00:15.751907 systemd[1942]: Reached target paths.target - Paths. May 10 00:00:15.751918 systemd[1942]: Reached target timers.target - Timers. May 10 00:00:15.759815 systemd[1942]: Starting dbus.socket - D-Bus User Message Bus Socket... May 10 00:00:15.765144 systemd[1942]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 10 00:00:15.765193 systemd[1942]: Reached target sockets.target - Sockets. May 10 00:00:15.765205 systemd[1942]: Reached target basic.target - Basic System. May 10 00:00:15.765242 systemd[1942]: Reached target default.target - Main User Target. May 10 00:00:15.765265 systemd[1942]: Startup finished in 179ms. May 10 00:00:15.765360 systemd[1]: Started user@500.service - User Manager for UID 500. May 10 00:00:15.769211 systemd[1]: Started session-1.scope - Session 1 of User core. May 10 00:00:15.772784 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 10 00:00:17.331268 waagent[1927]: 2025-05-10T00:00:17.331171Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 May 10 00:00:17.337231 waagent[1927]: 2025-05-10T00:00:17.337167Z INFO Daemon Daemon OS: flatcar 4081.3.3 May 10 00:00:17.342232 waagent[1927]: 2025-05-10T00:00:17.342181Z INFO Daemon Daemon Python: 3.11.9 May 10 00:00:17.346762 waagent[1927]: 2025-05-10T00:00:17.346687Z INFO Daemon Daemon Run daemon May 10 00:00:17.350835 waagent[1927]: 2025-05-10T00:00:17.350791Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.3' May 10 00:00:17.360318 waagent[1927]: 2025-05-10T00:00:17.360265Z INFO Daemon Daemon Using waagent for provisioning May 10 00:00:17.365610 waagent[1927]: 2025-05-10T00:00:17.365563Z INFO Daemon Daemon Activate resource disk May 10 00:00:17.370357 waagent[1927]: 2025-05-10T00:00:17.370308Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 10 00:00:17.381343 waagent[1927]: 2025-05-10T00:00:17.381292Z INFO Daemon Daemon Found device: None May 10 00:00:17.386001 waagent[1927]: 2025-05-10T00:00:17.385956Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 10 00:00:17.394381 waagent[1927]: 2025-05-10T00:00:17.394336Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 10 00:00:17.407353 waagent[1927]: 2025-05-10T00:00:17.407295Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 10 00:00:17.413233 waagent[1927]: 2025-05-10T00:00:17.413182Z INFO Daemon Daemon Running default provisioning handler May 10 00:00:17.425405 waagent[1927]: 2025-05-10T00:00:17.425339Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
May 10 00:00:17.439639 waagent[1927]: 2025-05-10T00:00:17.439577Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 10 00:00:17.450182 waagent[1927]: 2025-05-10T00:00:17.450123Z INFO Daemon Daemon cloud-init is enabled: False May 10 00:00:17.455043 waagent[1927]: 2025-05-10T00:00:17.454993Z INFO Daemon Daemon Copying ovf-env.xml May 10 00:00:17.602608 waagent[1927]: 2025-05-10T00:00:17.601285Z INFO Daemon Daemon Successfully mounted dvd May 10 00:00:17.616629 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 10 00:00:17.620017 waagent[1927]: 2025-05-10T00:00:17.619942Z INFO Daemon Daemon Detect protocol endpoint May 10 00:00:17.624992 waagent[1927]: 2025-05-10T00:00:17.624940Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 10 00:00:17.630961 waagent[1927]: 2025-05-10T00:00:17.630914Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler May 10 00:00:17.637389 waagent[1927]: 2025-05-10T00:00:17.637346Z INFO Daemon Daemon Test for route to 168.63.129.16 May 10 00:00:17.642791 waagent[1927]: 2025-05-10T00:00:17.642743Z INFO Daemon Daemon Route to 168.63.129.16 exists May 10 00:00:17.647950 waagent[1927]: 2025-05-10T00:00:17.647893Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 10 00:00:17.707113 waagent[1927]: 2025-05-10T00:00:17.707059Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 10 00:00:17.713947 waagent[1927]: 2025-05-10T00:00:17.713913Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 10 00:00:17.719803 waagent[1927]: 2025-05-10T00:00:17.719753Z INFO Daemon Daemon Server preferred version:2015-04-05 May 10 00:00:18.060028 waagent[1927]: 2025-05-10T00:00:18.059876Z INFO Daemon Daemon Initializing goal state during protocol detection May 10 00:00:18.066883 waagent[1927]: 2025-05-10T00:00:18.066822Z INFO Daemon Daemon Forcing an update of the goal state. 
May 10 00:00:18.076176 waagent[1927]: 2025-05-10T00:00:18.076123Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 10 00:00:18.099426 waagent[1927]: 2025-05-10T00:00:18.099374Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 10 00:00:18.105194 waagent[1927]: 2025-05-10T00:00:18.105145Z INFO Daemon May 10 00:00:18.108047 waagent[1927]: 2025-05-10T00:00:18.108003Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b50c6f0f-3a9f-414d-b74d-ca76d62ca0fd eTag: 6980987104385597915 source: Fabric] May 10 00:00:18.119195 waagent[1927]: 2025-05-10T00:00:18.119147Z INFO Daemon The vmSettings originated via Fabric; will ignore them. May 10 00:00:18.126518 waagent[1927]: 2025-05-10T00:00:18.126467Z INFO Daemon May 10 00:00:18.129248 waagent[1927]: 2025-05-10T00:00:18.129206Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 10 00:00:18.140166 waagent[1927]: 2025-05-10T00:00:18.140128Z INFO Daemon Daemon Downloading artifacts profile blob May 10 00:00:18.225789 waagent[1927]: 2025-05-10T00:00:18.225678Z INFO Daemon Downloaded certificate {'thumbprint': '55A377650E6F015890E30A6BE3B596C07C066C23', 'hasPrivateKey': False} May 10 00:00:18.235726 waagent[1927]: 2025-05-10T00:00:18.235668Z INFO Daemon Downloaded certificate {'thumbprint': '47F2F52EE301DEC1E4CF38F0BC7541963C60021C', 'hasPrivateKey': True} May 10 00:00:18.245305 waagent[1927]: 2025-05-10T00:00:18.245251Z INFO Daemon Fetch goal state completed May 10 00:00:18.256089 waagent[1927]: 2025-05-10T00:00:18.256018Z INFO Daemon Daemon Starting provisioning May 10 00:00:18.260743 waagent[1927]: 2025-05-10T00:00:18.260686Z INFO Daemon Daemon Handle ovf-env.xml. 
May 10 00:00:18.264997 waagent[1927]: 2025-05-10T00:00:18.264956Z INFO Daemon Daemon Set hostname [ci-4081.3.3-n-4cc30cd86c] May 10 00:00:18.284759 waagent[1927]: 2025-05-10T00:00:18.279813Z INFO Daemon Daemon Publish hostname [ci-4081.3.3-n-4cc30cd86c] May 10 00:00:18.286273 waagent[1927]: 2025-05-10T00:00:18.286215Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 10 00:00:18.292775 waagent[1927]: 2025-05-10T00:00:18.292689Z INFO Daemon Daemon Primary interface is [eth0] May 10 00:00:18.313557 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:00:18.313566 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:00:18.313596 systemd-networkd[1380]: eth0: DHCP lease lost May 10 00:00:18.314878 waagent[1927]: 2025-05-10T00:00:18.314778Z INFO Daemon Daemon Create user account if not exists May 10 00:00:18.320332 waagent[1927]: 2025-05-10T00:00:18.320280Z INFO Daemon Daemon User core already exists, skip useradd May 10 00:00:18.325773 waagent[1927]: 2025-05-10T00:00:18.325713Z INFO Daemon Daemon Configure sudoer May 10 00:00:18.330622 waagent[1927]: 2025-05-10T00:00:18.330565Z INFO Daemon Daemon Configure sshd May 10 00:00:18.330691 systemd-networkd[1380]: eth0: DHCPv6 lease lost May 10 00:00:18.335018 waagent[1927]: 2025-05-10T00:00:18.334955Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 10 00:00:18.347714 waagent[1927]: 2025-05-10T00:00:18.347663Z INFO Daemon Daemon Deploy ssh public key. 
May 10 00:00:18.366789 systemd-networkd[1380]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 10 00:00:19.447150 waagent[1927]: 2025-05-10T00:00:19.447081Z INFO Daemon Daemon Provisioning complete May 10 00:00:19.465260 waagent[1927]: 2025-05-10T00:00:19.465206Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 10 00:00:19.471311 waagent[1927]: 2025-05-10T00:00:19.471261Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 10 00:00:19.480639 waagent[1927]: 2025-05-10T00:00:19.480589Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent May 10 00:00:19.612650 waagent[2003]: 2025-05-10T00:00:19.612013Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) May 10 00:00:19.612650 waagent[2003]: 2025-05-10T00:00:19.612168Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.3 May 10 00:00:19.612650 waagent[2003]: 2025-05-10T00:00:19.612222Z INFO ExtHandler ExtHandler Python: 3.11.9 May 10 00:00:19.682379 waagent[2003]: 2025-05-10T00:00:19.682299Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 10 00:00:19.682760 waagent[2003]: 2025-05-10T00:00:19.682694Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 10 00:00:19.682932 waagent[2003]: 2025-05-10T00:00:19.682893Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 10 00:00:19.691158 waagent[2003]: 2025-05-10T00:00:19.691089Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 10 00:00:19.696711 waagent[2003]: 2025-05-10T00:00:19.696668Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 10 00:00:19.697304 waagent[2003]: 2025-05-10T00:00:19.697224Z INFO ExtHandler May 10 00:00:19.698756 waagent[2003]: 
2025-05-10T00:00:19.697427Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b5d837e4-c0c7-43c0-b388-42727c9f88f4 eTag: 6980987104385597915 source: Fabric] May 10 00:00:19.698756 waagent[2003]: 2025-05-10T00:00:19.697783Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 10 00:00:19.698756 waagent[2003]: 2025-05-10T00:00:19.698355Z INFO ExtHandler May 10 00:00:19.698756 waagent[2003]: 2025-05-10T00:00:19.698425Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 10 00:00:19.702298 waagent[2003]: 2025-05-10T00:00:19.702260Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 10 00:00:19.790831 waagent[2003]: 2025-05-10T00:00:19.790750Z INFO ExtHandler Downloaded certificate {'thumbprint': '55A377650E6F015890E30A6BE3B596C07C066C23', 'hasPrivateKey': False} May 10 00:00:19.791374 waagent[2003]: 2025-05-10T00:00:19.791329Z INFO ExtHandler Downloaded certificate {'thumbprint': '47F2F52EE301DEC1E4CF38F0BC7541963C60021C', 'hasPrivateKey': True} May 10 00:00:19.791993 waagent[2003]: 2025-05-10T00:00:19.791932Z INFO ExtHandler Fetch goal state completed May 10 00:00:19.807561 waagent[2003]: 2025-05-10T00:00:19.807507Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2003 May 10 00:00:19.807829 waagent[2003]: 2025-05-10T00:00:19.807786Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 10 00:00:19.809547 waagent[2003]: 2025-05-10T00:00:19.809498Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.3', '', 'Flatcar Container Linux by Kinvolk'] May 10 00:00:19.810074 waagent[2003]: 2025-05-10T00:00:19.810025Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 10 00:00:19.856372 waagent[2003]: 2025-05-10T00:00:19.856330Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 10 
00:00:19.856711 waagent[2003]: 2025-05-10T00:00:19.856668Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 10 00:00:19.863397 waagent[2003]: 2025-05-10T00:00:19.863353Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 10 00:00:19.869895 systemd[1]: Reloading requested from client PID 2018 ('systemctl') (unit waagent.service)... May 10 00:00:19.870162 systemd[1]: Reloading... May 10 00:00:19.951751 zram_generator::config[2058]: No configuration found. May 10 00:00:20.058696 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:00:20.137160 systemd[1]: Reloading finished in 266 ms. May 10 00:00:20.160083 waagent[2003]: 2025-05-10T00:00:20.159990Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service May 10 00:00:20.165925 systemd[1]: Reloading requested from client PID 2111 ('systemctl') (unit waagent.service)... May 10 00:00:20.166067 systemd[1]: Reloading... May 10 00:00:20.244847 zram_generator::config[2148]: No configuration found. May 10 00:00:20.347332 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:00:20.425202 systemd[1]: Reloading finished in 258 ms. 
May 10 00:00:20.445648 waagent[2003]: 2025-05-10T00:00:20.445460Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 10 00:00:20.445648 waagent[2003]: 2025-05-10T00:00:20.445628Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 10 00:00:21.216317 waagent[2003]: 2025-05-10T00:00:21.216231Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 10 00:00:21.216927 waagent[2003]: 2025-05-10T00:00:21.216872Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] May 10 00:00:21.217711 waagent[2003]: 2025-05-10T00:00:21.217623Z INFO ExtHandler ExtHandler Starting env monitor service. May 10 00:00:21.218201 waagent[2003]: 2025-05-10T00:00:21.218036Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 10 00:00:21.219163 waagent[2003]: 2025-05-10T00:00:21.218414Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 10 00:00:21.219163 waagent[2003]: 2025-05-10T00:00:21.218500Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 10 00:00:21.219163 waagent[2003]: 2025-05-10T00:00:21.218694Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
May 10 00:00:21.219163 waagent[2003]: 2025-05-10T00:00:21.218899Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 10 00:00:21.219163 waagent[2003]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 10 00:00:21.219163 waagent[2003]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 10 00:00:21.219163 waagent[2003]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 10 00:00:21.219163 waagent[2003]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 10 00:00:21.219163 waagent[2003]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 10 00:00:21.219163 waagent[2003]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 10 00:00:21.219505 waagent[2003]: 2025-05-10T00:00:21.219438Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 10 00:00:21.219675 waagent[2003]: 2025-05-10T00:00:21.219632Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 10 00:00:21.219795 waagent[2003]: 2025-05-10T00:00:21.219759Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 10 00:00:21.219931 waagent[2003]: 2025-05-10T00:00:21.219898Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 10 00:00:21.220301 waagent[2003]: 2025-05-10T00:00:21.220237Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 10 00:00:21.220390 waagent[2003]: 2025-05-10T00:00:21.220353Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 10 00:00:21.220528 waagent[2003]: 2025-05-10T00:00:21.220481Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
May 10 00:00:21.220754 waagent[2003]: 2025-05-10T00:00:21.220684Z INFO EnvHandler ExtHandler Configure routes May 10 00:00:21.221329 waagent[2003]: 2025-05-10T00:00:21.221242Z INFO EnvHandler ExtHandler Gateway:None May 10 00:00:21.221590 waagent[2003]: 2025-05-10T00:00:21.221550Z INFO EnvHandler ExtHandler Routes:None May 10 00:00:21.241053 waagent[2003]: 2025-05-10T00:00:21.241005Z INFO ExtHandler ExtHandler May 10 00:00:21.241251 waagent[2003]: 2025-05-10T00:00:21.241216Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 85325a64-4e90-4b16-b246-d7ede538c352 correlation f114a1b1-c469-4160-a515-ec01d117766f created: 2025-05-09T23:59:08.065742Z] May 10 00:00:21.241702 waagent[2003]: 2025-05-10T00:00:21.241663Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 10 00:00:21.242398 waagent[2003]: 2025-05-10T00:00:21.242359Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] May 10 00:00:21.297870 waagent[2003]: 2025-05-10T00:00:21.297802Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 67DF059A-1943-4E24-AF1F-8053A834AC9B;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] May 10 00:00:21.357202 waagent[2003]: 2025-05-10T00:00:21.357118Z INFO MonitorHandler ExtHandler Network interfaces: May 10 00:00:21.357202 waagent[2003]: Executing ['ip', '-a', '-o', 'link']: May 10 00:00:21.357202 waagent[2003]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 10 00:00:21.357202 waagent[2003]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fe:e1:d6 brd ff:ff:ff:ff:ff:ff May 10 00:00:21.357202 waagent[2003]: 3: enP9872s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 
00:0d:3a:fe:e1:d6 brd ff:ff:ff:ff:ff:ff\ altname enP9872p0s2 May 10 00:00:21.357202 waagent[2003]: Executing ['ip', '-4', '-a', '-o', 'address']: May 10 00:00:21.357202 waagent[2003]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 10 00:00:21.357202 waagent[2003]: 2: eth0 inet 10.200.20.24/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 10 00:00:21.357202 waagent[2003]: Executing ['ip', '-6', '-a', '-o', 'address']: May 10 00:00:21.357202 waagent[2003]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 10 00:00:21.357202 waagent[2003]: 2: eth0 inet6 fe80::20d:3aff:fefe:e1d6/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 10 00:00:21.357202 waagent[2003]: 3: enP9872s1 inet6 fe80::20d:3aff:fefe:e1d6/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 10 00:00:21.410007 waagent[2003]: 2025-05-10T00:00:21.409928Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: May 10 00:00:21.410007 waagent[2003]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 10 00:00:21.410007 waagent[2003]: pkts bytes target prot opt in out source destination May 10 00:00:21.410007 waagent[2003]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 10 00:00:21.410007 waagent[2003]: pkts bytes target prot opt in out source destination May 10 00:00:21.410007 waagent[2003]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 10 00:00:21.410007 waagent[2003]: pkts bytes target prot opt in out source destination May 10 00:00:21.410007 waagent[2003]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 10 00:00:21.410007 waagent[2003]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 10 00:00:21.410007 waagent[2003]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 10 00:00:21.412985 waagent[2003]: 2025-05-10T00:00:21.412922Z INFO EnvHandler ExtHandler Current Firewall rules: May 10 00:00:21.412985 waagent[2003]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 10 00:00:21.412985 waagent[2003]: pkts bytes target prot opt in out source destination May 10 00:00:21.412985 waagent[2003]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 10 00:00:21.412985 waagent[2003]: pkts bytes target prot opt in out source destination May 10 00:00:21.412985 waagent[2003]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 10 00:00:21.412985 waagent[2003]: pkts bytes target prot opt in out source destination May 10 00:00:21.412985 waagent[2003]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 10 00:00:21.412985 waagent[2003]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 10 00:00:21.412985 waagent[2003]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 10 00:00:21.413215 waagent[2003]: 2025-05-10T00:00:21.413192Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 10 00:00:25.853811 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 1. May 10 00:00:25.859889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:25.969785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:25.972755 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:00:26.019324 kubelet[2250]: E0510 00:00:26.019254 2250 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:00:26.023261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:00:26.023432 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:00:36.144127 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 00:00:36.153884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:36.244504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 10 00:00:36.247211 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:00:36.288128 kubelet[2271]: E0510 00:00:36.288049 2271 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:00:36.291195 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:00:36.291352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:00:37.703979 chronyd[1752]: Selected source PHC0
May 10 00:00:46.394126 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 10 00:00:46.402931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:00:46.496336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:00:46.499008 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:00:46.537134 kubelet[2292]: E0510 00:00:46.537061 2292 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:00:46.539858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:00:46.540040 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:00:54.126081 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
May 10 00:00:54.890127 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 10 00:00:54.902341 systemd[1]: Started sshd@0-10.200.20.24:22-10.200.16.10:57364.service - OpenSSH per-connection server daemon (10.200.16.10:57364).
May 10 00:00:55.368220 sshd[2301]: Accepted publickey for core from 10.200.16.10 port 57364 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:00:55.369691 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:55.375518 systemd-logind[1765]: New session 3 of user core.
May 10 00:00:55.377758 systemd[1]: Started session-3.scope - Session 3 of User core.
May 10 00:00:55.772990 systemd[1]: Started sshd@1-10.200.20.24:22-10.200.16.10:57366.service - OpenSSH per-connection server daemon (10.200.16.10:57366).
May 10 00:00:56.246777 sshd[2306]: Accepted publickey for core from 10.200.16.10 port 57366 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:00:56.248193 sshd[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:56.253283 systemd-logind[1765]: New session 4 of user core.
May 10 00:00:56.258050 systemd[1]: Started session-4.scope - Session 4 of User core.
May 10 00:00:56.593986 sshd[2306]: pam_unix(sshd:session): session closed for user core
May 10 00:00:56.597662 systemd[1]: sshd@1-10.200.20.24:22-10.200.16.10:57366.service: Deactivated successfully.
May 10 00:00:56.600813 systemd[1]: session-4.scope: Deactivated successfully.
May 10 00:00:56.602101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 10 00:00:56.602874 systemd-logind[1765]: Session 4 logged out. Waiting for processes to exit.
May 10 00:00:56.607949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:00:56.608889 systemd-logind[1765]: Removed session 4.
May 10 00:00:56.687048 systemd[1]: Started sshd@2-10.200.20.24:22-10.200.16.10:57370.service - OpenSSH per-connection server daemon (10.200.16.10:57370).
May 10 00:00:56.718999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:00:56.719286 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:00:56.767510 kubelet[2328]: E0510 00:00:56.767450 2328 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:00:56.771949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:00:56.772115 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:00:57.163380 sshd[2318]: Accepted publickey for core from 10.200.16.10 port 57370 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:00:57.164787 sshd[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:57.169558 systemd-logind[1765]: New session 5 of user core.
May 10 00:00:57.177079 systemd[1]: Started session-5.scope - Session 5 of User core.
May 10 00:00:57.504944 sshd[2318]: pam_unix(sshd:session): session closed for user core
May 10 00:00:57.508152 systemd-logind[1765]: Session 5 logged out. Waiting for processes to exit.
May 10 00:00:57.509539 systemd[1]: sshd@2-10.200.20.24:22-10.200.16.10:57370.service: Deactivated successfully.
May 10 00:00:57.511143 systemd[1]: session-5.scope: Deactivated successfully.
May 10 00:00:57.513037 systemd-logind[1765]: Removed session 5.
May 10 00:00:57.595012 systemd[1]: Started sshd@3-10.200.20.24:22-10.200.16.10:57372.service - OpenSSH per-connection server daemon (10.200.16.10:57372).
May 10 00:00:58.067191 sshd[2343]: Accepted publickey for core from 10.200.16.10 port 57372 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:00:58.068540 sshd[2343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:58.073514 systemd-logind[1765]: New session 6 of user core.
May 10 00:00:58.079038 systemd[1]: Started session-6.scope - Session 6 of User core.
May 10 00:00:58.412955 sshd[2343]: pam_unix(sshd:session): session closed for user core
May 10 00:00:58.415891 systemd-logind[1765]: Session 6 logged out. Waiting for processes to exit.
May 10 00:00:58.416126 systemd[1]: sshd@3-10.200.20.24:22-10.200.16.10:57372.service: Deactivated successfully.
May 10 00:00:58.418886 systemd[1]: session-6.scope: Deactivated successfully.
May 10 00:00:58.419575 systemd-logind[1765]: Removed session 6.
May 10 00:00:58.489989 systemd[1]: Started sshd@4-10.200.20.24:22-10.200.16.10:57382.service - OpenSSH per-connection server daemon (10.200.16.10:57382).
May 10 00:00:58.939516 sshd[2351]: Accepted publickey for core from 10.200.16.10 port 57382 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:00:58.940935 sshd[2351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:58.945311 systemd-logind[1765]: New session 7 of user core.
May 10 00:00:58.954985 systemd[1]: Started session-7.scope - Session 7 of User core.
May 10 00:00:59.098372 update_engine[1773]: I20250510 00:00:59.097760 1773 update_attempter.cc:509] Updating boot flags...
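Each SSH connection above follows a fixed pattern: an sshd Accepted line, a pam_unix session open, a logind session number, then the corresponding close/deactivate entries. A small sketch (assuming the exact OpenSSH message format shown in these entries) for pulling the accepted logins out of such a transcript:

```python
import re

# e.g. "sshd[2343]: Accepted publickey for core from 10.200.16.10 port 57372 ssh2: RSA SHA256:..."
ACCEPT_RE = re.compile(
    r"sshd\[(\d+)\]: Accepted (\w+) for (\w+) from ([\d.]+) port (\d+)"
)

def accepted_sessions(lines):
    """Return (pid, auth_method, user, source_ip, source_port) for accepted logins."""
    out = []
    for line in lines:
        m = ACCEPT_RE.search(line)
        if m:
            pid, method, user, ip, port = m.groups()
            out.append((int(pid), method, user, ip, int(port)))
    return out

sample = [
    "May 10 00:00:58.067191 sshd[2343]: Accepted publickey for core from 10.200.16.10 port 57372 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA",
]
print(accepted_sessions(sample))
# prints [(2343, 'publickey', 'core', '10.200.16.10', 57372)]
```

Pairing these tuples with the later "session closed" entries for the same sshd PID gives per-session durations.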
May 10 00:00:59.148779 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2367)
May 10 00:00:59.230938 sudo[2394]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 10 00:00:59.231247 sudo[2394]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 10 00:00:59.250643 sudo[2394]: pam_unix(sudo:session): session closed for user root
May 10 00:00:59.321542 sshd[2351]: pam_unix(sshd:session): session closed for user core
May 10 00:00:59.326881 systemd-logind[1765]: Session 7 logged out. Waiting for processes to exit.
May 10 00:00:59.327631 systemd[1]: sshd@4-10.200.20.24:22-10.200.16.10:57382.service: Deactivated successfully.
May 10 00:00:59.330466 systemd[1]: session-7.scope: Deactivated successfully.
May 10 00:00:59.331619 systemd-logind[1765]: Removed session 7.
May 10 00:00:59.403962 systemd[1]: Started sshd@5-10.200.20.24:22-10.200.16.10:43448.service - OpenSSH per-connection server daemon (10.200.16.10:43448).
May 10 00:00:59.878129 sshd[2399]: Accepted publickey for core from 10.200.16.10 port 43448 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:00:59.879606 sshd[2399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:59.884693 systemd-logind[1765]: New session 8 of user core.
May 10 00:00:59.891103 systemd[1]: Started session-8.scope - Session 8 of User core.
May 10 00:01:00.148517 sudo[2404]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 10 00:01:00.148846 sudo[2404]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 10 00:01:00.152119 sudo[2404]: pam_unix(sudo:session): session closed for user root
May 10 00:01:00.157668 sudo[2403]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 10 00:01:00.158065 sudo[2403]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 10 00:01:00.176078 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 10 00:01:00.177889 auditctl[2407]: No rules
May 10 00:01:00.178272 systemd[1]: audit-rules.service: Deactivated successfully.
May 10 00:01:00.178561 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 10 00:01:00.183430 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 10 00:01:00.211843 augenrules[2426]: No rules
May 10 00:01:00.213573 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 10 00:01:00.217205 sudo[2403]: pam_unix(sudo:session): session closed for user root
May 10 00:01:00.293243 sshd[2399]: pam_unix(sshd:session): session closed for user core
May 10 00:01:00.296577 systemd-logind[1765]: Session 8 logged out. Waiting for processes to exit.
May 10 00:01:00.298345 systemd[1]: sshd@5-10.200.20.24:22-10.200.16.10:43448.service: Deactivated successfully.
May 10 00:01:00.301283 systemd[1]: session-8.scope: Deactivated successfully.
May 10 00:01:00.302239 systemd-logind[1765]: Removed session 8.
May 10 00:01:00.377997 systemd[1]: Started sshd@6-10.200.20.24:22-10.200.16.10:43450.service - OpenSSH per-connection server daemon (10.200.16.10:43450).
May 10 00:01:00.851270 sshd[2435]: Accepted publickey for core from 10.200.16.10 port 43450 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA
May 10 00:01:00.852707 sshd[2435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:00.858003 systemd-logind[1765]: New session 9 of user core.
May 10 00:01:00.867085 systemd[1]: Started session-9.scope - Session 9 of User core.
May 10 00:01:01.121653 sudo[2439]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 10 00:01:01.122268 sudo[2439]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 10 00:01:02.537998 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 10 00:01:02.538662 (dockerd)[2455]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 10 00:01:03.493390 dockerd[2455]: time="2025-05-10T00:01:03.493324897Z" level=info msg="Starting up"
May 10 00:01:03.972747 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3733750014-merged.mount: Deactivated successfully.
May 10 00:01:04.060153 dockerd[2455]: time="2025-05-10T00:01:04.060107981Z" level=info msg="Loading containers: start."
May 10 00:01:04.333778 kernel: Initializing XFRM netlink socket
May 10 00:01:04.606532 systemd-networkd[1380]: docker0: Link UP
May 10 00:01:04.626217 dockerd[2455]: time="2025-05-10T00:01:04.626055585Z" level=info msg="Loading containers: done."
May 10 00:01:04.648499 dockerd[2455]: time="2025-05-10T00:01:04.648372110Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 10 00:01:04.648499 dockerd[2455]: time="2025-05-10T00:01:04.648497390Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 10 00:01:04.648761 dockerd[2455]: time="2025-05-10T00:01:04.648621950Z" level=info msg="Daemon has completed initialization"
May 10 00:01:04.705194 dockerd[2455]: time="2025-05-10T00:01:04.705057762Z" level=info msg="API listen on /run/docker.sock"
May 10 00:01:04.706547 systemd[1]: Started docker.service - Docker Application Container Engine.
May 10 00:01:04.969892 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck722288158-merged.mount: Deactivated successfully.
May 10 00:01:06.476668 containerd[1794]: time="2025-05-10T00:01:06.476609510Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 10 00:01:06.893995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 10 00:01:06.900924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:01:07.007978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:01:07.011758 (kubelet)[2607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:01:07.057056 kubelet[2607]: E0510 00:01:07.056995 2607 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:01:07.062899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:01:07.063219 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:01:07.632780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4164744539.mount: Deactivated successfully.
May 10 00:01:08.945315 containerd[1794]: time="2025-05-10T00:01:08.945266490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:08.948520 containerd[1794]: time="2025-05-10T00:01:08.948479411Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150"
May 10 00:01:08.952928 containerd[1794]: time="2025-05-10T00:01:08.952873172Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:08.957528 containerd[1794]: time="2025-05-10T00:01:08.957478613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:08.958817 containerd[1794]: time="2025-05-10T00:01:08.958606613Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.481951583s"
May 10 00:01:08.958817 containerd[1794]: time="2025-05-10T00:01:08.958644373Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 10 00:01:08.978897 containerd[1794]: time="2025-05-10T00:01:08.978849578Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 10 00:01:10.389363 containerd[1794]: time="2025-05-10T00:01:10.389292502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:10.392845 containerd[1794]: time="2025-05-10T00:01:10.392812143Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550"
May 10 00:01:10.399034 containerd[1794]: time="2025-05-10T00:01:10.398982105Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:10.404898 containerd[1794]: time="2025-05-10T00:01:10.404849788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:10.406031 containerd[1794]: time="2025-05-10T00:01:10.405885108Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.42698653s"
May 10 00:01:10.406031 containerd[1794]: time="2025-05-10T00:01:10.405923428Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 10 00:01:10.426391 containerd[1794]: time="2025-05-10T00:01:10.426343715Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 10 00:01:11.603785 containerd[1794]: time="2025-05-10T00:01:11.603239709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:11.605824 containerd[1794]: time="2025-05-10T00:01:11.605599789Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945"
May 10 00:01:11.609277 containerd[1794]: time="2025-05-10T00:01:11.609230191Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:11.614713 containerd[1794]: time="2025-05-10T00:01:11.614628433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:11.615781 containerd[1794]: time="2025-05-10T00:01:11.615748873Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.189334438s"
May 10 00:01:11.616048 containerd[1794]: time="2025-05-10T00:01:11.615859593Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 10 00:01:11.634644 containerd[1794]: time="2025-05-10T00:01:11.634567960Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 10 00:01:12.705980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3313620487.mount: Deactivated successfully.
May 10 00:01:13.486631 containerd[1794]: time="2025-05-10T00:01:13.486568762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:13.489946 containerd[1794]: time="2025-05-10T00:01:13.489748323Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705"
May 10 00:01:13.493320 containerd[1794]: time="2025-05-10T00:01:13.493271404Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:13.498827 containerd[1794]: time="2025-05-10T00:01:13.498761646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:13.499598 containerd[1794]: time="2025-05-10T00:01:13.499463206Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.864808766s"
May 10 00:01:13.499598 containerd[1794]: time="2025-05-10T00:01:13.499497486Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 10 00:01:13.517698 containerd[1794]: time="2025-05-10T00:01:13.517487413Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 10 00:01:14.198876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount755523719.mount: Deactivated successfully.
May 10 00:01:15.107766 containerd[1794]: time="2025-05-10T00:01:15.107422318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:15.111115 containerd[1794]: time="2025-05-10T00:01:15.110871199Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
May 10 00:01:15.114945 containerd[1794]: time="2025-05-10T00:01:15.114912641Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:15.121679 containerd[1794]: time="2025-05-10T00:01:15.121628923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:15.123148 containerd[1794]: time="2025-05-10T00:01:15.123006324Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.605482391s"
May 10 00:01:15.123148 containerd[1794]: time="2025-05-10T00:01:15.123043724Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 10 00:01:15.143894 containerd[1794]: time="2025-05-10T00:01:15.143794171Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 10 00:01:15.761170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821878120.mount: Deactivated successfully.
May 10 00:01:15.786397 containerd[1794]: time="2025-05-10T00:01:15.786349888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:15.789707 containerd[1794]: time="2025-05-10T00:01:15.789670969Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
May 10 00:01:15.795418 containerd[1794]: time="2025-05-10T00:01:15.795385291Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:15.800023 containerd[1794]: time="2025-05-10T00:01:15.799966933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:15.800714 containerd[1794]: time="2025-05-10T00:01:15.800582653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 656.742442ms"
May 10 00:01:15.800714 containerd[1794]: time="2025-05-10T00:01:15.800614933Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 10 00:01:15.821520 containerd[1794]: time="2025-05-10T00:01:15.821473061Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 10 00:01:16.497833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1715014611.mount: Deactivated successfully.
May 10 00:01:17.144026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 10 00:01:17.150324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:01:18.926889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:01:18.939084 (kubelet)[2783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:01:18.985113 kubelet[2783]: E0510 00:01:18.985039 2783 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:01:18.988922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:01:18.989085 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:01:20.335771 containerd[1794]: time="2025-05-10T00:01:20.334983357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:20.337400 containerd[1794]: time="2025-05-10T00:01:20.337366837Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
May 10 00:01:20.342099 containerd[1794]: time="2025-05-10T00:01:20.342039599Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:20.347562 containerd[1794]: time="2025-05-10T00:01:20.347503920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:20.348838 containerd[1794]: time="2025-05-10T00:01:20.348801960Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.527284339s"
May 10 00:01:20.348838 containerd[1794]: time="2025-05-10T00:01:20.348836000Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 10 00:01:26.331872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:01:26.338260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:01:26.357496 systemd[1]: Reloading requested from client PID 2895 ('systemctl') (unit session-9.scope)...
May 10 00:01:26.357651 systemd[1]: Reloading...
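Each containerd "Pulled image" message in this transcript pairs a byte count with a wall-clock duration (e.g. size "66189079" in 4.527284339s, or 656.742442ms for the pause image), so effective registry throughput can be read straight out of the log. A sketch, assuming only the quoted-size and s/ms duration format shown here:

```python
import re

# Matches the trailing size/duration of a containerd "Pulled image ..." message,
# tolerating the escaped quotes (\") that appear in journal output.
PULL_RE = re.compile(r'size \\?"(\d+)\\?" in ([\d.]+)(ms|s)')

def pull_rate_mib_s(msg):
    """Return approximate pull throughput in MiB/s, or None if no match."""
    m = PULL_RE.search(msg)
    if not m:
        return None
    size_bytes = int(m.group(1))
    seconds = float(m.group(2)) * (0.001 if m.group(3) == "ms" else 1.0)
    return size_bytes / seconds / (1024 * 1024)

msg = 'Pulled image "registry.k8s.io/etcd:3.5.12-0" ... size "66189079" in 4.527284339s'
print(round(pull_rate_mib_s(msg), 1))
# prints 13.9
```

Comparing rates across the pulls above (etcd at ~14 MiB/s versus the sub-second pause image) separates transfer-bound pulls from ones dominated by fixed per-request overhead.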
May 10 00:01:26.474782 zram_generator::config[2945]: No configuration found.
May 10 00:01:26.574965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 00:01:26.653392 systemd[1]: Reloading finished in 295 ms.
May 10 00:01:26.701853 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 10 00:01:26.701931 systemd[1]: kubelet.service: Failed with result 'signal'.
May 10 00:01:26.702224 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:01:26.707125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:01:26.812879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:01:26.818139 (kubelet)[3014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 10 00:01:26.854549 kubelet[3014]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 00:01:26.854549 kubelet[3014]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 10 00:01:26.854549 kubelet[3014]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 00:01:26.854549 kubelet[3014]: I0510 00:01:26.854655 3014 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 10 00:01:27.655034 kubelet[3014]: I0510 00:01:27.654997 3014 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 10 00:01:27.655034 kubelet[3014]: I0510 00:01:27.655027 3014 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 10 00:01:27.655487 kubelet[3014]: I0510 00:01:27.655456 3014 server.go:927] "Client rotation is on, will bootstrap in background"
May 10 00:01:27.670026 kubelet[3014]: E0510 00:01:27.669970 3014 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.24:6443: connect: connection refused
May 10 00:01:27.670371 kubelet[3014]: I0510 00:01:27.670268 3014 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 10 00:01:27.677877 kubelet[3014]: I0510 00:01:27.677845 3014 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 10 00:01:27.679315 kubelet[3014]: I0510 00:01:27.679286 3014 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 10 00:01:27.679742 kubelet[3014]: I0510 00:01:27.679402 3014 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-4cc30cd86c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 10 00:01:27.679742 kubelet[3014]: I0510 00:01:27.679570 3014 topology_manager.go:138] "Creating topology manager with none policy"
May 10 00:01:27.679742 kubelet[3014]: I0510 00:01:27.679579 3014 container_manager_linux.go:301] "Creating device plugin manager"
May 10 00:01:27.679742 kubelet[3014]: I0510 00:01:27.679695 3014 state_mem.go:36] "Initialized new in-memory state store"
May 10 00:01:27.680639 kubelet[3014]: I0510 00:01:27.680626 3014 kubelet.go:400] "Attempting to sync node with API server"
May 10 00:01:27.680927 kubelet[3014]: I0510 00:01:27.680915 3014 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 10 00:01:27.681036 kubelet[3014]: I0510 00:01:27.681016 3014 kubelet.go:312] "Adding apiserver pod source"
May 10 00:01:27.681109 kubelet[3014]: I0510 00:01:27.681101 3014 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 10 00:01:27.683895 kubelet[3014]: W0510 00:01:27.683814 3014 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-4cc30cd86c&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
May 10 00:01:27.683895 kubelet[3014]: E0510 00:01:27.683863 3014 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-4cc30cd86c&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
May 10 00:01:27.684772 kubelet[3014]: I0510 00:01:27.684059 3014 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 10 00:01:27.684772 kubelet[3014]: I0510 00:01:27.684204 3014 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 10 00:01:27.684772 kubelet[3014]: W0510 00:01:27.684240 3014 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 10 00:01:27.684901 kubelet[3014]: I0510 00:01:27.684887 3014 server.go:1264] "Started kubelet"
May 10 00:01:27.690187 kubelet[3014]: I0510 00:01:27.690160 3014 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 10 00:01:27.691027 kubelet[3014]: W0510 00:01:27.690489 3014 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
May 10 00:01:27.691027 kubelet[3014]: E0510 00:01:27.690539 3014 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
May 10 00:01:27.691027 kubelet[3014]: E0510 00:01:27.690575 3014 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-n-4cc30cd86c.183e0170639055f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-4cc30cd86c,UID:ci-4081.3.3-n-4cc30cd86c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-4cc30cd86c,},FirstTimestamp:2025-05-10 00:01:27.68469964 +0000 UTC m=+0.863650336,LastTimestamp:2025-05-10 00:01:27.68469964 +0000 UTC m=+0.863650336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-4cc30cd86c,}"
May 10 00:01:27.691643 kubelet[3014]: I0510 00:01:27.691606 3014 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 10 00:01:27.692462 kubelet[3014]: I0510 00:01:27.692424 3014 server.go:455] "Adding debug handlers to kubelet server"
May 10 00:01:27.693283 kubelet[3014]: I0510 00:01:27.693219 3014 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 10 00:01:27.693444 kubelet[3014]: I0510 00:01:27.693416 3014 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 10 00:01:27.694935 kubelet[3014]: I0510 00:01:27.694792 3014 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 10 00:01:27.694935 kubelet[3014]: I0510 00:01:27.694920 3014 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 10 00:01:27.696788 kubelet[3014]: I0510 00:01:27.696267 3014 reconciler.go:26] "Reconciler: start to sync state"
May 10 00:01:27.696788 kubelet[3014]: W0510 00:01:27.696551 3014 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
May 10 00:01:27.696788 kubelet[3014]: E0510 00:01:27.696588 3014 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
May 10 00:01:27.698108 kubelet[3014]: E0510 00:01:27.697925 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-4cc30cd86c?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="200ms"
May 10 00:01:27.698868 kubelet[3014]: E0510 00:01:27.698847 3014 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:01:27.701755 kubelet[3014]: I0510 00:01:27.701432 3014 factory.go:221] Registration of the containerd container factory successfully May 10 00:01:27.701755 kubelet[3014]: I0510 00:01:27.701452 3014 factory.go:221] Registration of the systemd container factory successfully May 10 00:01:27.702007 kubelet[3014]: I0510 00:01:27.701986 3014 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:01:27.742058 kubelet[3014]: I0510 00:01:27.742013 3014 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:01:27.743119 kubelet[3014]: I0510 00:01:27.743087 3014 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:01:27.743201 kubelet[3014]: I0510 00:01:27.743128 3014 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:01:27.743201 kubelet[3014]: I0510 00:01:27.743146 3014 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:01:27.743201 kubelet[3014]: E0510 00:01:27.743186 3014 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:01:27.744560 kubelet[3014]: W0510 00:01:27.744501 3014 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 10 00:01:27.744560 kubelet[3014]: E0510 00:01:27.744563 3014 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.200.20.24:6443: connect: connection refused May 10 00:01:27.754645 kubelet[3014]: I0510 00:01:27.754622 3014 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:01:27.754645 kubelet[3014]: I0510 00:01:27.754638 3014 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:01:27.754774 kubelet[3014]: I0510 00:01:27.754657 3014 state_mem.go:36] "Initialized new in-memory state store" May 10 00:01:27.760176 kubelet[3014]: I0510 00:01:27.760156 3014 policy_none.go:49] "None policy: Start" May 10 00:01:27.760751 kubelet[3014]: I0510 00:01:27.760709 3014 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:01:27.760816 kubelet[3014]: I0510 00:01:27.760760 3014 state_mem.go:35] "Initializing new in-memory state store" May 10 00:01:27.769290 kubelet[3014]: I0510 00:01:27.769261 3014 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:01:27.769480 kubelet[3014]: I0510 00:01:27.769441 3014 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:01:27.769550 kubelet[3014]: I0510 00:01:27.769536 3014 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:01:27.773470 kubelet[3014]: E0510 00:01:27.773444 3014 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-n-4cc30cd86c\" not found" May 10 00:01:27.796749 kubelet[3014]: I0510 00:01:27.796604 3014 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.797045 kubelet[3014]: E0510 00:01:27.796992 3014 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.843345 kubelet[3014]: I0510 00:01:27.843293 3014 topology_manager.go:215] "Topology Admit 
Handler" podUID="12fa05c2ab93445bc4e27684fe2fc34b" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.845018 kubelet[3014]: I0510 00:01:27.844917 3014 topology_manager.go:215] "Topology Admit Handler" podUID="2a372f997e0faad46630de7686548231" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.848290 kubelet[3014]: I0510 00:01:27.846374 3014 topology_manager.go:215] "Topology Admit Handler" podUID="8b177cba605348135042bcc6af3fc792" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.896597 kubelet[3014]: I0510 00:01:27.896564 3014 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12fa05c2ab93445bc4e27684fe2fc34b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-4cc30cd86c\" (UID: \"12fa05c2ab93445bc4e27684fe2fc34b\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.897102 kubelet[3014]: I0510 00:01:27.897058 3014 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a372f997e0faad46630de7686548231-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-4cc30cd86c\" (UID: \"2a372f997e0faad46630de7686548231\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.897203 kubelet[3014]: I0510 00:01:27.897191 3014 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2a372f997e0faad46630de7686548231-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-4cc30cd86c\" (UID: \"2a372f997e0faad46630de7686548231\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.897298 kubelet[3014]: I0510 00:01:27.897287 3014 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a372f997e0faad46630de7686548231-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-4cc30cd86c\" (UID: \"2a372f997e0faad46630de7686548231\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.897410 kubelet[3014]: I0510 00:01:27.897395 3014 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b177cba605348135042bcc6af3fc792-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-4cc30cd86c\" (UID: \"8b177cba605348135042bcc6af3fc792\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.897506 kubelet[3014]: I0510 00:01:27.897492 3014 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12fa05c2ab93445bc4e27684fe2fc34b-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-4cc30cd86c\" (UID: \"12fa05c2ab93445bc4e27684fe2fc34b\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.897615 kubelet[3014]: I0510 00:01:27.897602 3014 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12fa05c2ab93445bc4e27684fe2fc34b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-4cc30cd86c\" (UID: \"12fa05c2ab93445bc4e27684fe2fc34b\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.897741 kubelet[3014]: I0510 00:01:27.897712 3014 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a372f997e0faad46630de7686548231-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-4cc30cd86c\" (UID: \"2a372f997e0faad46630de7686548231\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.897846 kubelet[3014]: I0510 00:01:27.897832 3014 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a372f997e0faad46630de7686548231-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-4cc30cd86c\" (UID: \"2a372f997e0faad46630de7686548231\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.898921 kubelet[3014]: E0510 00:01:27.898890 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-4cc30cd86c?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="400ms" May 10 00:01:27.998942 kubelet[3014]: I0510 00:01:27.998858 3014 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-n-4cc30cd86c" May 10 00:01:27.999478 kubelet[3014]: E0510 00:01:27.999453 3014 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4081.3.3-n-4cc30cd86c" May 10 00:01:28.095012 kubelet[3014]: E0510 00:01:28.094916 3014 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-n-4cc30cd86c.183e0170639055f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-4cc30cd86c,UID:ci-4081.3.3-n-4cc30cd86c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-4cc30cd86c,},FirstTimestamp:2025-05-10 00:01:27.68469964 
+0000 UTC m=+0.863650336,LastTimestamp:2025-05-10 00:01:27.68469964 +0000 UTC m=+0.863650336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-4cc30cd86c,}" May 10 00:01:28.150093 containerd[1794]: time="2025-05-10T00:01:28.150051818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-4cc30cd86c,Uid:12fa05c2ab93445bc4e27684fe2fc34b,Namespace:kube-system,Attempt:0,}" May 10 00:01:28.150865 containerd[1794]: time="2025-05-10T00:01:28.150715938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-4cc30cd86c,Uid:2a372f997e0faad46630de7686548231,Namespace:kube-system,Attempt:0,}" May 10 00:01:28.154616 containerd[1794]: time="2025-05-10T00:01:28.154493899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-4cc30cd86c,Uid:8b177cba605348135042bcc6af3fc792,Namespace:kube-system,Attempt:0,}" May 10 00:01:28.299833 kubelet[3014]: E0510 00:01:28.299675 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-4cc30cd86c?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="800ms" May 10 00:01:28.401659 kubelet[3014]: I0510 00:01:28.401629 3014 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-n-4cc30cd86c" May 10 00:01:28.401998 kubelet[3014]: E0510 00:01:28.401974 3014 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4081.3.3-n-4cc30cd86c" May 10 00:01:28.710235 kubelet[3014]: W0510 00:01:28.710154 3014 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-4cc30cd86c&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 10 00:01:28.710235 kubelet[3014]: E0510 00:01:28.710213 3014 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-4cc30cd86c&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 10 00:01:28.907498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3319237766.mount: Deactivated successfully. May 10 00:01:28.932064 containerd[1794]: time="2025-05-10T00:01:28.932012088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:01:28.932450 kubelet[3014]: W0510 00:01:28.931703 3014 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 10 00:01:28.932828 kubelet[3014]: E0510 00:01:28.932803 3014 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 10 00:01:28.950137 containerd[1794]: time="2025-05-10T00:01:28.950080654Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 10 00:01:28.956057 containerd[1794]: time="2025-05-10T00:01:28.956020375Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 
00:01:28.959544 containerd[1794]: time="2025-05-10T00:01:28.959512496Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 10 00:01:28.963096 containerd[1794]: time="2025-05-10T00:01:28.963012857Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:01:28.966789 containerd[1794]: time="2025-05-10T00:01:28.966701219Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:01:28.970034 containerd[1794]: time="2025-05-10T00:01:28.970005979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 10 00:01:28.974970 containerd[1794]: time="2025-05-10T00:01:28.974923341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:01:28.975843 containerd[1794]: time="2025-05-10T00:01:28.975589701Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 824.792883ms" May 10 00:01:28.977215 containerd[1794]: time="2025-05-10T00:01:28.977181582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 822.634443ms" May 10 00:01:28.981489 containerd[1794]: time="2025-05-10T00:01:28.981449143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 831.313925ms" May 10 00:01:29.102353 kubelet[3014]: E0510 00:01:29.102300 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-4cc30cd86c?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="1.6s" May 10 00:01:29.126109 kubelet[3014]: W0510 00:01:29.126038 3014 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 10 00:01:29.126109 kubelet[3014]: E0510 00:01:29.126108 3014 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 10 00:01:29.147653 kubelet[3014]: W0510 00:01:29.147625 3014 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 10 00:01:29.147706 kubelet[3014]: E0510 00:01:29.147660 3014 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 10 00:01:29.204655 kubelet[3014]: I0510 00:01:29.204620 3014 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-n-4cc30cd86c" May 10 00:01:29.205006 kubelet[3014]: E0510 00:01:29.204976 3014 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4081.3.3-n-4cc30cd86c" May 10 00:01:29.835637 containerd[1794]: time="2025-05-10T00:01:29.835532995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:29.835637 containerd[1794]: time="2025-05-10T00:01:29.835596635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:29.835637 containerd[1794]: time="2025-05-10T00:01:29.835613675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:29.836379 containerd[1794]: time="2025-05-10T00:01:29.835695595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:29.836379 containerd[1794]: time="2025-05-10T00:01:29.836058995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:29.836379 containerd[1794]: time="2025-05-10T00:01:29.836107395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:29.836379 containerd[1794]: time="2025-05-10T00:01:29.836123995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:29.836578 containerd[1794]: time="2025-05-10T00:01:29.836199755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:29.837914 containerd[1794]: time="2025-05-10T00:01:29.837855276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:29.838037 containerd[1794]: time="2025-05-10T00:01:29.837899676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:29.838037 containerd[1794]: time="2025-05-10T00:01:29.837911396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:29.838037 containerd[1794]: time="2025-05-10T00:01:29.837983916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:29.855770 kubelet[3014]: E0510 00:01:29.855693 3014 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.24:6443: connect: connection refused May 10 00:01:29.938164 containerd[1794]: time="2025-05-10T00:01:29.938126345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-4cc30cd86c,Uid:2a372f997e0faad46630de7686548231,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb7c69af1bee92a52b7c9acde333c052ce28a452e0b746e410a9c91d3b0b7a97\"" May 10 00:01:29.941932 containerd[1794]: time="2025-05-10T00:01:29.941852626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-4cc30cd86c,Uid:12fa05c2ab93445bc4e27684fe2fc34b,Namespace:kube-system,Attempt:0,} returns sandbox id \"07416cd9b125b8be5f12feea43fe856f491874d044253dc36651164040a041c8\"" May 10 00:01:29.943991 containerd[1794]: time="2025-05-10T00:01:29.943961827Z" level=info msg="CreateContainer within sandbox \"bb7c69af1bee92a52b7c9acde333c052ce28a452e0b746e410a9c91d3b0b7a97\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 00:01:29.946584 containerd[1794]: time="2025-05-10T00:01:29.946175867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-4cc30cd86c,Uid:8b177cba605348135042bcc6af3fc792,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b507caa02fb72d6124390121a6cc4eb0be1d0d112b33e613006aa4aff39e4d2\"" May 10 00:01:29.946584 containerd[1794]: time="2025-05-10T00:01:29.946342508Z" level=info msg="CreateContainer within sandbox \"07416cd9b125b8be5f12feea43fe856f491874d044253dc36651164040a041c8\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 00:01:29.949796 containerd[1794]: time="2025-05-10T00:01:29.949645869Z" level=info msg="CreateContainer within sandbox \"4b507caa02fb72d6124390121a6cc4eb0be1d0d112b33e613006aa4aff39e4d2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 00:01:29.986561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761969388.mount: Deactivated successfully. May 10 00:01:30.023956 containerd[1794]: time="2025-05-10T00:01:30.023899490Z" level=info msg="CreateContainer within sandbox \"bb7c69af1bee92a52b7c9acde333c052ce28a452e0b746e410a9c91d3b0b7a97\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0f5bed346cca6ac8276af3db839003c513b06c0ba823ddb31495f3fff5f62829\"" May 10 00:01:30.026838 containerd[1794]: time="2025-05-10T00:01:30.026698531Z" level=info msg="StartContainer for \"0f5bed346cca6ac8276af3db839003c513b06c0ba823ddb31495f3fff5f62829\"" May 10 00:01:30.030995 containerd[1794]: time="2025-05-10T00:01:30.030605852Z" level=info msg="CreateContainer within sandbox \"07416cd9b125b8be5f12feea43fe856f491874d044253dc36651164040a041c8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"43252bfce71c5719fe06bd8409c43789de61202478516b56aab9e5a499baf51e\"" May 10 00:01:30.031262 containerd[1794]: time="2025-05-10T00:01:30.031149013Z" level=info msg="StartContainer for \"43252bfce71c5719fe06bd8409c43789de61202478516b56aab9e5a499baf51e\"" May 10 00:01:30.032501 containerd[1794]: time="2025-05-10T00:01:30.032377253Z" level=info msg="CreateContainer within sandbox \"4b507caa02fb72d6124390121a6cc4eb0be1d0d112b33e613006aa4aff39e4d2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c92dfde8cd291144b94aa8b40e161e16e5fdc5135bf5517abd02678fb9fffa9c\"" May 10 00:01:30.033457 containerd[1794]: time="2025-05-10T00:01:30.033384613Z" level=info msg="StartContainer for 
\"c92dfde8cd291144b94aa8b40e161e16e5fdc5135bf5517abd02678fb9fffa9c\"" May 10 00:01:30.141630 containerd[1794]: time="2025-05-10T00:01:30.141384245Z" level=info msg="StartContainer for \"43252bfce71c5719fe06bd8409c43789de61202478516b56aab9e5a499baf51e\" returns successfully" May 10 00:01:30.141630 containerd[1794]: time="2025-05-10T00:01:30.141507565Z" level=info msg="StartContainer for \"0f5bed346cca6ac8276af3db839003c513b06c0ba823ddb31495f3fff5f62829\" returns successfully" May 10 00:01:30.166479 containerd[1794]: time="2025-05-10T00:01:30.166415452Z" level=info msg="StartContainer for \"c92dfde8cd291144b94aa8b40e161e16e5fdc5135bf5517abd02678fb9fffa9c\" returns successfully" May 10 00:01:30.809732 kubelet[3014]: I0510 00:01:30.808136 3014 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-n-4cc30cd86c" May 10 00:01:32.398510 kubelet[3014]: E0510 00:01:32.398461 3014 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-n-4cc30cd86c\" not found" node="ci-4081.3.3-n-4cc30cd86c" May 10 00:01:32.541247 kubelet[3014]: I0510 00:01:32.541091 3014 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-n-4cc30cd86c" May 10 00:01:32.692306 kubelet[3014]: I0510 00:01:32.691947 3014 apiserver.go:52] "Watching apiserver" May 10 00:01:32.695227 kubelet[3014]: I0510 00:01:32.695197 3014 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:01:34.595395 kubelet[3014]: W0510 00:01:34.595280 3014 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:01:34.675775 kubelet[3014]: W0510 00:01:34.675710 3014 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:01:34.805955 systemd[1]: Reloading requested from 
client PID 3293 ('systemctl') (unit session-9.scope)... May 10 00:01:34.805973 systemd[1]: Reloading... May 10 00:01:34.902929 zram_generator::config[3336]: No configuration found. May 10 00:01:35.039894 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:01:35.125607 systemd[1]: Reloading finished in 319 ms. May 10 00:01:35.157903 kubelet[3014]: W0510 00:01:35.152855 3014 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:01:35.163570 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:01:35.164568 kubelet[3014]: E0510 00:01:35.164275 3014 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081.3.3-n-4cc30cd86c.183e0170639055f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-4cc30cd86c,UID:ci-4081.3.3-n-4cc30cd86c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-4cc30cd86c,},FirstTimestamp:2025-05-10 00:01:27.68469964 +0000 UTC m=+0.863650336,LastTimestamp:2025-05-10 00:01:27.68469964 +0000 UTC m=+0.863650336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-4cc30cd86c,}" May 10 00:01:35.184899 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:01:35.187656 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:01:35.197413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 10 00:01:35.332667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:01:35.340312 (kubelet)[3407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 00:01:35.393106 kubelet[3407]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:01:35.393106 kubelet[3407]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:01:35.393106 kubelet[3407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:01:35.393474 kubelet[3407]: I0510 00:01:35.393143 3407 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:01:35.397982 kubelet[3407]: I0510 00:01:35.397637 3407 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 00:01:35.397982 kubelet[3407]: I0510 00:01:35.397680 3407 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:01:35.397982 kubelet[3407]: I0510 00:01:35.397985 3407 server.go:927] "Client rotation is on, will bootstrap in background" May 10 00:01:35.399840 kubelet[3407]: I0510 00:01:35.399808 3407 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 10 00:01:35.401388 kubelet[3407]: I0510 00:01:35.401128 3407 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:01:35.419358 kubelet[3407]: I0510 00:01:35.416948 3407 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 00:01:35.419358 kubelet[3407]: I0510 00:01:35.417497 3407 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:01:35.419358 kubelet[3407]: I0510 00:01:35.417540 3407 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-4cc30cd86c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experimen
talMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 00:01:35.419358 kubelet[3407]: I0510 00:01:35.417933 3407 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:01:35.419651 kubelet[3407]: I0510 00:01:35.417944 3407 container_manager_linux.go:301] "Creating device plugin manager" May 10 00:01:35.419651 kubelet[3407]: I0510 00:01:35.417983 3407 state_mem.go:36] "Initialized new in-memory state store" May 10 00:01:35.419651 kubelet[3407]: I0510 00:01:35.418100 3407 kubelet.go:400] "Attempting to sync node with API server" May 10 00:01:35.419651 kubelet[3407]: I0510 00:01:35.418334 3407 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:01:35.419651 kubelet[3407]: I0510 00:01:35.418376 3407 kubelet.go:312] "Adding apiserver pod source" May 10 00:01:35.419651 kubelet[3407]: I0510 00:01:35.418391 3407 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:01:35.426703 kubelet[3407]: I0510 00:01:35.425456 3407 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 10 00:01:35.426703 kubelet[3407]: I0510 00:01:35.425635 3407 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:01:35.431585 kubelet[3407]: I0510 00:01:35.428884 3407 server.go:1264] "Started kubelet" May 10 00:01:35.431585 kubelet[3407]: I0510 00:01:35.430519 3407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:01:35.434024 kubelet[3407]: I0510 00:01:35.433972 3407 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:01:35.434617 kubelet[3407]: I0510 00:01:35.434568 3407 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:01:35.435420 kubelet[3407]: I0510 
00:01:35.435397 3407 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:01:35.437662 kubelet[3407]: I0510 00:01:35.436760 3407 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 00:01:35.441885 kubelet[3407]: I0510 00:01:35.441864 3407 server.go:455] "Adding debug handlers to kubelet server" May 10 00:01:35.443599 kubelet[3407]: I0510 00:01:35.443563 3407 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:01:35.448882 kubelet[3407]: I0510 00:01:35.448862 3407 reconciler.go:26] "Reconciler: start to sync state" May 10 00:01:35.457729 kubelet[3407]: I0510 00:01:35.456060 3407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:01:35.457729 kubelet[3407]: I0510 00:01:35.457151 3407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:01:35.457729 kubelet[3407]: I0510 00:01:35.457207 3407 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:01:35.457729 kubelet[3407]: I0510 00:01:35.457229 3407 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:01:35.457729 kubelet[3407]: E0510 00:01:35.457282 3407 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:01:35.465993 kubelet[3407]: I0510 00:01:35.465963 3407 factory.go:221] Registration of the systemd container factory successfully May 10 00:01:35.466214 kubelet[3407]: I0510 00:01:35.466196 3407 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:01:35.473845 kubelet[3407]: I0510 00:01:35.473809 3407 factory.go:221] Registration of the containerd container factory successfully May 10 00:01:35.493222 kubelet[3407]: E0510 00:01:35.493185 
3407 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:01:35.534363 kubelet[3407]: I0510 00:01:35.534308 3407 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:01:35.534363 kubelet[3407]: I0510 00:01:35.534340 3407 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:01:35.534363 kubelet[3407]: I0510 00:01:35.534362 3407 state_mem.go:36] "Initialized new in-memory state store" May 10 00:01:35.534535 kubelet[3407]: I0510 00:01:35.534511 3407 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 00:01:35.534535 kubelet[3407]: I0510 00:01:35.534521 3407 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 00:01:35.534589 kubelet[3407]: I0510 00:01:35.534540 3407 policy_none.go:49] "None policy: Start" May 10 00:01:35.535441 kubelet[3407]: I0510 00:01:35.535419 3407 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:01:35.535441 kubelet[3407]: I0510 00:01:35.535446 3407 state_mem.go:35] "Initializing new in-memory state store" May 10 00:01:35.535597 kubelet[3407]: I0510 00:01:35.535577 3407 state_mem.go:75] "Updated machine memory state" May 10 00:01:35.536745 kubelet[3407]: I0510 00:01:35.536656 3407 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:01:35.536886 kubelet[3407]: I0510 00:01:35.536849 3407 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:01:35.537795 kubelet[3407]: I0510 00:01:35.536948 3407 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:01:35.542940 kubelet[3407]: I0510 00:01:35.542863 3407 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.557741 kubelet[3407]: I0510 00:01:35.557669 3407 topology_manager.go:215] "Topology Admit Handler" 
podUID="12fa05c2ab93445bc4e27684fe2fc34b" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.557859 kubelet[3407]: I0510 00:01:35.557797 3407 topology_manager.go:215] "Topology Admit Handler" podUID="2a372f997e0faad46630de7686548231" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.557859 kubelet[3407]: I0510 00:01:35.557833 3407 topology_manager.go:215] "Topology Admit Handler" podUID="8b177cba605348135042bcc6af3fc792" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.562559 kubelet[3407]: I0510 00:01:35.562516 3407 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.562743 kubelet[3407]: I0510 00:01:35.562605 3407 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.574256 kubelet[3407]: W0510 00:01:35.574221 3407 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:01:35.574407 kubelet[3407]: E0510 00:01:35.574285 3407 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.3-n-4cc30cd86c\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.578854 kubelet[3407]: W0510 00:01:35.578598 3407 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:01:35.578854 kubelet[3407]: E0510 00:01:35.578647 3407 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.3-n-4cc30cd86c\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.580470 kubelet[3407]: W0510 00:01:35.580411 3407 warnings.go:70] metadata.name: this is used in the Pod's hostname, 
which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:01:35.580625 kubelet[3407]: E0510 00:01:35.580564 3407 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-n-4cc30cd86c\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.651492 kubelet[3407]: I0510 00:01:35.651459 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a372f997e0faad46630de7686548231-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-4cc30cd86c\" (UID: \"2a372f997e0faad46630de7686548231\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.651873 kubelet[3407]: I0510 00:01:35.651665 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b177cba605348135042bcc6af3fc792-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-4cc30cd86c\" (UID: \"8b177cba605348135042bcc6af3fc792\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.651873 kubelet[3407]: I0510 00:01:35.651689 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12fa05c2ab93445bc4e27684fe2fc34b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-4cc30cd86c\" (UID: \"12fa05c2ab93445bc4e27684fe2fc34b\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.651873 kubelet[3407]: I0510 00:01:35.651728 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12fa05c2ab93445bc4e27684fe2fc34b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-4cc30cd86c\" (UID: 
\"12fa05c2ab93445bc4e27684fe2fc34b\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.651873 kubelet[3407]: I0510 00:01:35.651758 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2a372f997e0faad46630de7686548231-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-4cc30cd86c\" (UID: \"2a372f997e0faad46630de7686548231\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.651873 kubelet[3407]: I0510 00:01:35.651782 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a372f997e0faad46630de7686548231-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-4cc30cd86c\" (UID: \"2a372f997e0faad46630de7686548231\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.652025 kubelet[3407]: I0510 00:01:35.651829 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12fa05c2ab93445bc4e27684fe2fc34b-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-4cc30cd86c\" (UID: \"12fa05c2ab93445bc4e27684fe2fc34b\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.652025 kubelet[3407]: I0510 00:01:35.651846 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a372f997e0faad46630de7686548231-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-4cc30cd86c\" (UID: \"2a372f997e0faad46630de7686548231\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:35.652142 kubelet[3407]: I0510 00:01:35.652108 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/2a372f997e0faad46630de7686548231-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-4cc30cd86c\" (UID: \"2a372f997e0faad46630de7686548231\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:36.429065 kubelet[3407]: I0510 00:01:36.428557 3407 apiserver.go:52] "Watching apiserver" May 10 00:01:36.443865 kubelet[3407]: I0510 00:01:36.443837 3407 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:01:36.539981 kubelet[3407]: W0510 00:01:36.537763 3407 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:01:36.539981 kubelet[3407]: E0510 00:01:36.537835 3407 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-n-4cc30cd86c\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-4cc30cd86c" May 10 00:01:36.545288 kubelet[3407]: I0510 00:01:36.545142 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-n-4cc30cd86c" podStartSLOduration=2.545124087 podStartE2EDuration="2.545124087s" podCreationTimestamp="2025-05-10 00:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:01:36.510015394 +0000 UTC m=+1.165153226" watchObservedRunningTime="2025-05-10 00:01:36.545124087 +0000 UTC m=+1.200261879" May 10 00:01:36.561277 kubelet[3407]: I0510 00:01:36.561207 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-n-4cc30cd86c" podStartSLOduration=1.5611885330000002 podStartE2EDuration="1.561188533s" podCreationTimestamp="2025-05-10 00:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-10 00:01:36.547995088 +0000 UTC m=+1.203132880" watchObservedRunningTime="2025-05-10 00:01:36.561188533 +0000 UTC m=+1.216326365" May 10 00:01:36.590077 kubelet[3407]: I0510 00:01:36.589896 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4cc30cd86c" podStartSLOduration=2.589877863 podStartE2EDuration="2.589877863s" podCreationTimestamp="2025-05-10 00:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:01:36.561770093 +0000 UTC m=+1.216907925" watchObservedRunningTime="2025-05-10 00:01:36.589877863 +0000 UTC m=+1.245015655" May 10 00:01:40.496833 sudo[2439]: pam_unix(sudo:session): session closed for user root May 10 00:01:40.572643 sshd[2435]: pam_unix(sshd:session): session closed for user core May 10 00:01:40.577084 systemd[1]: sshd@6-10.200.20.24:22-10.200.16.10:43450.service: Deactivated successfully. May 10 00:01:40.581124 systemd[1]: session-9.scope: Deactivated successfully. May 10 00:01:40.582337 systemd-logind[1765]: Session 9 logged out. Waiting for processes to exit. May 10 00:01:40.583670 systemd-logind[1765]: Removed session 9. May 10 00:01:49.701114 kubelet[3407]: I0510 00:01:49.701048 3407 topology_manager.go:215] "Topology Admit Handler" podUID="150f483f-292b-4946-94c0-959183760c4e" podNamespace="kube-system" podName="kube-proxy-59c2d" May 10 00:01:49.718225 kubelet[3407]: I0510 00:01:49.718099 3407 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 00:01:49.719564 containerd[1794]: time="2025-05-10T00:01:49.719438074Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 10 00:01:49.720114 kubelet[3407]: I0510 00:01:49.719730 3407 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 00:01:49.751572 kubelet[3407]: I0510 00:01:49.751508 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zfv\" (UniqueName: \"kubernetes.io/projected/150f483f-292b-4946-94c0-959183760c4e-kube-api-access-l6zfv\") pod \"kube-proxy-59c2d\" (UID: \"150f483f-292b-4946-94c0-959183760c4e\") " pod="kube-system/kube-proxy-59c2d" May 10 00:01:49.751739 kubelet[3407]: I0510 00:01:49.751614 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/150f483f-292b-4946-94c0-959183760c4e-kube-proxy\") pod \"kube-proxy-59c2d\" (UID: \"150f483f-292b-4946-94c0-959183760c4e\") " pod="kube-system/kube-proxy-59c2d" May 10 00:01:49.751739 kubelet[3407]: I0510 00:01:49.751680 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/150f483f-292b-4946-94c0-959183760c4e-lib-modules\") pod \"kube-proxy-59c2d\" (UID: \"150f483f-292b-4946-94c0-959183760c4e\") " pod="kube-system/kube-proxy-59c2d" May 10 00:01:49.751810 kubelet[3407]: I0510 00:01:49.751701 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/150f483f-292b-4946-94c0-959183760c4e-xtables-lock\") pod \"kube-proxy-59c2d\" (UID: \"150f483f-292b-4946-94c0-959183760c4e\") " pod="kube-system/kube-proxy-59c2d" May 10 00:01:49.868525 kubelet[3407]: I0510 00:01:49.868456 3407 topology_manager.go:215] "Topology Admit Handler" podUID="e4f274ac-1f73-4f83-92dc-91095ff14e89" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-gmcw2" May 10 00:01:49.879127 kubelet[3407]: W0510 00:01:49.879078 3407 reflector.go:547] 
object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.3-n-4cc30cd86c" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.3-n-4cc30cd86c' and this object May 10 00:01:49.879127 kubelet[3407]: E0510 00:01:49.879123 3407 reflector.go:150] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.3-n-4cc30cd86c" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.3-n-4cc30cd86c' and this object May 10 00:01:49.879286 kubelet[3407]: W0510 00:01:49.879163 3407 reflector.go:547] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081.3.3-n-4cc30cd86c" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.3-n-4cc30cd86c' and this object May 10 00:01:49.879286 kubelet[3407]: E0510 00:01:49.879175 3407 reflector.go:150] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081.3.3-n-4cc30cd86c" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.3-n-4cc30cd86c' and this object May 10 00:01:49.889818 kubelet[3407]: E0510 00:01:49.889770 3407 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 10 00:01:49.889818 kubelet[3407]: E0510 00:01:49.889804 3407 projected.go:200] Error preparing data for projected volume kube-api-access-l6zfv for pod 
kube-system/kube-proxy-59c2d: configmap "kube-root-ca.crt" not found May 10 00:01:49.889979 kubelet[3407]: E0510 00:01:49.889864 3407 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/150f483f-292b-4946-94c0-959183760c4e-kube-api-access-l6zfv podName:150f483f-292b-4946-94c0-959183760c4e nodeName:}" failed. No retries permitted until 2025-05-10 00:01:50.389842613 +0000 UTC m=+15.044980445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l6zfv" (UniqueName: "kubernetes.io/projected/150f483f-292b-4946-94c0-959183760c4e-kube-api-access-l6zfv") pod "kube-proxy-59c2d" (UID: "150f483f-292b-4946-94c0-959183760c4e") : configmap "kube-root-ca.crt" not found May 10 00:01:49.953002 kubelet[3407]: I0510 00:01:49.952828 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e4f274ac-1f73-4f83-92dc-91095ff14e89-var-lib-calico\") pod \"tigera-operator-797db67f8-gmcw2\" (UID: \"e4f274ac-1f73-4f83-92dc-91095ff14e89\") " pod="tigera-operator/tigera-operator-797db67f8-gmcw2" May 10 00:01:49.953002 kubelet[3407]: I0510 00:01:49.952917 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5br5k\" (UniqueName: \"kubernetes.io/projected/e4f274ac-1f73-4f83-92dc-91095ff14e89-kube-api-access-5br5k\") pod \"tigera-operator-797db67f8-gmcw2\" (UID: \"e4f274ac-1f73-4f83-92dc-91095ff14e89\") " pod="tigera-operator/tigera-operator-797db67f8-gmcw2" May 10 00:01:50.610002 containerd[1794]: time="2025-05-10T00:01:50.609946257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59c2d,Uid:150f483f-292b-4946-94c0-959183760c4e,Namespace:kube-system,Attempt:0,}" May 10 00:01:50.651683 containerd[1794]: time="2025-05-10T00:01:50.651273111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:50.651683 containerd[1794]: time="2025-05-10T00:01:50.651335631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:50.651683 containerd[1794]: time="2025-05-10T00:01:50.651360711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:50.651683 containerd[1794]: time="2025-05-10T00:01:50.651455991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:50.684772 containerd[1794]: time="2025-05-10T00:01:50.684694242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59c2d,Uid:150f483f-292b-4946-94c0-959183760c4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3402e8c32a36c2d3bd41d652f2fcd25e8a6451cc09daea2dea177a83f29c900a\"" May 10 00:01:50.688096 containerd[1794]: time="2025-05-10T00:01:50.688066203Z" level=info msg="CreateContainer within sandbox \"3402e8c32a36c2d3bd41d652f2fcd25e8a6451cc09daea2dea177a83f29c900a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 00:01:50.728143 containerd[1794]: time="2025-05-10T00:01:50.728088657Z" level=info msg="CreateContainer within sandbox \"3402e8c32a36c2d3bd41d652f2fcd25e8a6451cc09daea2dea177a83f29c900a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e61954d1c2cb01b8bc81339524394c8ad3ae47335d893a834dc59eebc5438af4\"" May 10 00:01:50.729647 containerd[1794]: time="2025-05-10T00:01:50.729540297Z" level=info msg="StartContainer for \"e61954d1c2cb01b8bc81339524394c8ad3ae47335d893a834dc59eebc5438af4\"" May 10 00:01:50.786910 containerd[1794]: time="2025-05-10T00:01:50.786867476Z" level=info msg="StartContainer for \"e61954d1c2cb01b8bc81339524394c8ad3ae47335d893a834dc59eebc5438af4\" returns successfully" May 10 
00:01:51.076917 containerd[1794]: time="2025-05-10T00:01:51.076849854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-gmcw2,Uid:e4f274ac-1f73-4f83-92dc-91095ff14e89,Namespace:tigera-operator,Attempt:0,}" May 10 00:01:51.121975 containerd[1794]: time="2025-05-10T00:01:51.121866630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:51.121975 containerd[1794]: time="2025-05-10T00:01:51.121920190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:51.121975 containerd[1794]: time="2025-05-10T00:01:51.121931750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:51.122760 containerd[1794]: time="2025-05-10T00:01:51.122677950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:51.164401 containerd[1794]: time="2025-05-10T00:01:51.164349644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-gmcw2,Uid:e4f274ac-1f73-4f83-92dc-91095ff14e89,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1fd276acdf96035e3d4dc3f5420724b471c367114e8d3a2386f419918ffa6b56\"" May 10 00:01:51.166349 containerd[1794]: time="2025-05-10T00:01:51.166315445Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 10 00:01:51.544949 kubelet[3407]: I0510 00:01:51.544882 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-59c2d" podStartSLOduration=2.544849812 podStartE2EDuration="2.544849812s" podCreationTimestamp="2025-05-10 00:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:01:51.544122772 +0000 UTC m=+16.199260604" watchObservedRunningTime="2025-05-10 00:01:51.544849812 +0000 UTC m=+16.199987644" May 10 00:01:53.664383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3479999876.mount: Deactivated successfully. 
May 10 00:01:54.909930 containerd[1794]: time="2025-05-10T00:01:54.909856229Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:54.913391 containerd[1794]: time="2025-05-10T00:01:54.913206390Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 10 00:01:54.917173 containerd[1794]: time="2025-05-10T00:01:54.917111592Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:54.924432 containerd[1794]: time="2025-05-10T00:01:54.924325674Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:54.925507 containerd[1794]: time="2025-05-10T00:01:54.925123635Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 3.758773229s" May 10 00:01:54.925507 containerd[1794]: time="2025-05-10T00:01:54.925161355Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 10 00:01:54.928001 containerd[1794]: time="2025-05-10T00:01:54.927974235Z" level=info msg="CreateContainer within sandbox \"1fd276acdf96035e3d4dc3f5420724b471c367114e8d3a2386f419918ffa6b56\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 10 00:01:54.971211 containerd[1794]: time="2025-05-10T00:01:54.971157890Z" level=info msg="CreateContainer within sandbox 
\"1fd276acdf96035e3d4dc3f5420724b471c367114e8d3a2386f419918ffa6b56\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"764a5bb3febc1db45eec6670b8e37e2de0f9c53ab9529209d6fb4185b744151d\"" May 10 00:01:54.972968 containerd[1794]: time="2025-05-10T00:01:54.972749091Z" level=info msg="StartContainer for \"764a5bb3febc1db45eec6670b8e37e2de0f9c53ab9529209d6fb4185b744151d\"" May 10 00:01:55.028925 containerd[1794]: time="2025-05-10T00:01:55.028791110Z" level=info msg="StartContainer for \"764a5bb3febc1db45eec6670b8e37e2de0f9c53ab9529209d6fb4185b744151d\" returns successfully" May 10 00:01:59.032281 kubelet[3407]: I0510 00:01:59.032129 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-gmcw2" podStartSLOduration=6.271628283 podStartE2EDuration="10.032074274s" podCreationTimestamp="2025-05-10 00:01:49 +0000 UTC" firstStartedPulling="2025-05-10 00:01:51.165767044 +0000 UTC m=+15.820904836" lastFinishedPulling="2025-05-10 00:01:54.926212995 +0000 UTC m=+19.581350827" observedRunningTime="2025-05-10 00:01:55.560706849 +0000 UTC m=+20.215844681" watchObservedRunningTime="2025-05-10 00:01:59.032074274 +0000 UTC m=+23.687212106" May 10 00:01:59.033060 kubelet[3407]: I0510 00:01:59.032299 3407 topology_manager.go:215] "Topology Admit Handler" podUID="571fc4c1-4de2-4fae-98fa-2872093bcfa3" podNamespace="calico-system" podName="calico-typha-f9787c7d-7nbc7" May 10 00:01:59.043356 kubelet[3407]: W0510 00:01:59.042410 3407 reflector.go:547] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081.3.3-n-4cc30cd86c" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.3-n-4cc30cd86c' and this object May 10 00:01:59.043356 kubelet[3407]: E0510 00:01:59.042452 3407 reflector.go:150] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: 
failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081.3.3-n-4cc30cd86c" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.3-n-4cc30cd86c' and this object May 10 00:01:59.043356 kubelet[3407]: W0510 00:01:59.042997 3407 reflector.go:547] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4081.3.3-n-4cc30cd86c" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.3-n-4cc30cd86c' and this object May 10 00:01:59.043356 kubelet[3407]: E0510 00:01:59.043022 3407 reflector.go:150] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4081.3.3-n-4cc30cd86c" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.3-n-4cc30cd86c' and this object May 10 00:01:59.043356 kubelet[3407]: W0510 00:01:59.043054 3407 reflector.go:547] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.3-n-4cc30cd86c" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.3-n-4cc30cd86c' and this object May 10 00:01:59.044922 kubelet[3407]: E0510 00:01:59.043063 3407 reflector.go:150] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.3-n-4cc30cd86c" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.3-n-4cc30cd86c' and this object May 10 00:01:59.116427 kubelet[3407]: I0510 
00:01:59.115554 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/571fc4c1-4de2-4fae-98fa-2872093bcfa3-tigera-ca-bundle\") pod \"calico-typha-f9787c7d-7nbc7\" (UID: \"571fc4c1-4de2-4fae-98fa-2872093bcfa3\") " pod="calico-system/calico-typha-f9787c7d-7nbc7" May 10 00:01:59.116427 kubelet[3407]: I0510 00:01:59.115598 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/571fc4c1-4de2-4fae-98fa-2872093bcfa3-typha-certs\") pod \"calico-typha-f9787c7d-7nbc7\" (UID: \"571fc4c1-4de2-4fae-98fa-2872093bcfa3\") " pod="calico-system/calico-typha-f9787c7d-7nbc7" May 10 00:01:59.116427 kubelet[3407]: I0510 00:01:59.115619 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpj96\" (UniqueName: \"kubernetes.io/projected/571fc4c1-4de2-4fae-98fa-2872093bcfa3-kube-api-access-dpj96\") pod \"calico-typha-f9787c7d-7nbc7\" (UID: \"571fc4c1-4de2-4fae-98fa-2872093bcfa3\") " pod="calico-system/calico-typha-f9787c7d-7nbc7" May 10 00:01:59.157126 kubelet[3407]: I0510 00:01:59.157075 3407 topology_manager.go:215] "Topology Admit Handler" podUID="7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd" podNamespace="calico-system" podName="calico-node-wlwq5" May 10 00:01:59.216812 kubelet[3407]: I0510 00:01:59.216371 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd-xtables-lock\") pod \"calico-node-wlwq5\" (UID: \"7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd\") " pod="calico-system/calico-node-wlwq5" May 10 00:01:59.216812 kubelet[3407]: I0510 00:01:59.216418 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd-flexvol-driver-host\") pod \"calico-node-wlwq5\" (UID: \"7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd\") " pod="calico-system/calico-node-wlwq5" May 10 00:01:59.216812 kubelet[3407]: I0510 00:01:59.216439 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd-var-lib-calico\") pod \"calico-node-wlwq5\" (UID: \"7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd\") " pod="calico-system/calico-node-wlwq5" May 10 00:01:59.216812 kubelet[3407]: I0510 00:01:59.216455 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd-cni-log-dir\") pod \"calico-node-wlwq5\" (UID: \"7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd\") " pod="calico-system/calico-node-wlwq5" May 10 00:01:59.216812 kubelet[3407]: I0510 00:01:59.216493 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd-policysync\") pod \"calico-node-wlwq5\" (UID: \"7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd\") " pod="calico-system/calico-node-wlwq5" May 10 00:01:59.217050 kubelet[3407]: I0510 00:01:59.216508 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd-tigera-ca-bundle\") pod \"calico-node-wlwq5\" (UID: \"7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd\") " pod="calico-system/calico-node-wlwq5" May 10 00:01:59.217050 kubelet[3407]: I0510 00:01:59.216525 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd-cni-bin-dir\") pod \"calico-node-wlwq5\" (UID: \"7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd\") " pod="calico-system/calico-node-wlwq5" May 10 00:01:59.217050 kubelet[3407]: I0510 00:01:59.216547 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2fxb\" (UniqueName: \"kubernetes.io/projected/7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd-kube-api-access-l2fxb\") pod \"calico-node-wlwq5\" (UID: \"7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd\") " pod="calico-system/calico-node-wlwq5" May 10 00:01:59.217050 kubelet[3407]: I0510 00:01:59.216563 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd-lib-modules\") pod \"calico-node-wlwq5\" (UID: \"7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd\") " pod="calico-system/calico-node-wlwq5" May 10 00:01:59.217050 kubelet[3407]: I0510 00:01:59.216578 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd-node-certs\") pod \"calico-node-wlwq5\" (UID: \"7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd\") " pod="calico-system/calico-node-wlwq5" May 10 00:01:59.217168 kubelet[3407]: I0510 00:01:59.216592 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd-var-run-calico\") pod \"calico-node-wlwq5\" (UID: \"7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd\") " pod="calico-system/calico-node-wlwq5" May 10 00:01:59.217168 kubelet[3407]: I0510 00:01:59.216608 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd-cni-net-dir\") pod \"calico-node-wlwq5\" (UID: \"7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd\") " pod="calico-system/calico-node-wlwq5" May 10 00:01:59.322474 kubelet[3407]: E0510 00:01:59.322177 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.322474 kubelet[3407]: W0510 00:01:59.322204 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.322474 kubelet[3407]: E0510 00:01:59.322226 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.322906 kubelet[3407]: E0510 00:01:59.322715 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.322906 kubelet[3407]: W0510 00:01:59.322757 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.322906 kubelet[3407]: E0510 00:01:59.322770 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.328142 kubelet[3407]: E0510 00:01:59.328124 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.328224 kubelet[3407]: W0510 00:01:59.328211 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.328295 kubelet[3407]: E0510 00:01:59.328282 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.364923 kubelet[3407]: I0510 00:01:59.364867 3407 topology_manager.go:215] "Topology Admit Handler" podUID="a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb" podNamespace="calico-system" podName="csi-node-driver-7wwnd" May 10 00:01:59.365183 kubelet[3407]: E0510 00:01:59.365144 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wwnd" podUID="a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb" May 10 00:01:59.398672 kubelet[3407]: E0510 00:01:59.398637 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.398807 kubelet[3407]: W0510 00:01:59.398692 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.398807 kubelet[3407]: E0510 00:01:59.398712 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.422651 kubelet[3407]: I0510 00:01:59.422282 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb-kubelet-dir\") pod \"csi-node-driver-7wwnd\" (UID: \"a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb\") " pod="calico-system/csi-node-driver-7wwnd" May 10 00:01:59.423772 kubelet[3407]: E0510 00:01:59.423743 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.423772 kubelet[3407]: W0510 00:01:59.423764 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.424273 kubelet[3407]: E0510 00:01:59.423786 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.424273 kubelet[3407]: I0510 00:01:59.423811 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f68zq\" (UniqueName: \"kubernetes.io/projected/a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb-kube-api-access-f68zq\") pod \"csi-node-driver-7wwnd\" (UID: \"a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb\") " pod="calico-system/csi-node-driver-7wwnd" May 10 00:01:59.425128 kubelet[3407]: E0510 00:01:59.425066 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.425128 kubelet[3407]: W0510 00:01:59.425091 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.425842 kubelet[3407]: E0510 00:01:59.425225 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.426818 kubelet[3407]: E0510 00:01:59.426793 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.426818 kubelet[3407]: W0510 00:01:59.426812 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.426973 kubelet[3407]: E0510 00:01:59.426844 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.427636 kubelet[3407]: I0510 00:01:59.427528 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb-varrun\") pod \"csi-node-driver-7wwnd\" (UID: \"a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb\") " pod="calico-system/csi-node-driver-7wwnd" May 10 00:01:59.428103 kubelet[3407]: E0510 00:01:59.428079 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.428103 kubelet[3407]: W0510 00:01:59.428098 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.428473 kubelet[3407]: E0510 00:01:59.428111 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.428473 kubelet[3407]: E0510 00:01:59.428338 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.428473 kubelet[3407]: W0510 00:01:59.428350 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.428473 kubelet[3407]: E0510 00:01:59.428360 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.428473 kubelet[3407]: I0510 00:01:59.428378 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb-registration-dir\") pod \"csi-node-driver-7wwnd\" (UID: \"a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb\") " pod="calico-system/csi-node-driver-7wwnd" May 10 00:01:59.429212 kubelet[3407]: E0510 00:01:59.429082 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.429212 kubelet[3407]: W0510 00:01:59.429099 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.429212 kubelet[3407]: E0510 00:01:59.429116 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.429668 kubelet[3407]: E0510 00:01:59.429440 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.429668 kubelet[3407]: W0510 00:01:59.429451 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.429668 kubelet[3407]: E0510 00:01:59.429466 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.430021 kubelet[3407]: E0510 00:01:59.429814 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.430021 kubelet[3407]: W0510 00:01:59.429826 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.430021 kubelet[3407]: E0510 00:01:59.429840 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.430156 kubelet[3407]: E0510 00:01:59.430143 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.430306 kubelet[3407]: W0510 00:01:59.430200 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.430306 kubelet[3407]: E0510 00:01:59.430226 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.430453 kubelet[3407]: E0510 00:01:59.430440 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.430519 kubelet[3407]: W0510 00:01:59.430508 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.430586 kubelet[3407]: E0510 00:01:59.430575 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.431179 kubelet[3407]: E0510 00:01:59.430838 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.431179 kubelet[3407]: W0510 00:01:59.431136 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.431179 kubelet[3407]: E0510 00:01:59.431156 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.431651 kubelet[3407]: E0510 00:01:59.431528 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.431651 kubelet[3407]: W0510 00:01:59.431549 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.431651 kubelet[3407]: E0510 00:01:59.431562 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.431651 kubelet[3407]: I0510 00:01:59.431580 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb-socket-dir\") pod \"csi-node-driver-7wwnd\" (UID: \"a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb\") " pod="calico-system/csi-node-driver-7wwnd" May 10 00:01:59.432100 kubelet[3407]: E0510 00:01:59.431797 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.432100 kubelet[3407]: W0510 00:01:59.431810 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.432100 kubelet[3407]: E0510 00:01:59.431822 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.432539 kubelet[3407]: E0510 00:01:59.432514 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.432539 kubelet[3407]: W0510 00:01:59.432532 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.432619 kubelet[3407]: E0510 00:01:59.432577 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.432835 kubelet[3407]: E0510 00:01:59.432813 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.432878 kubelet[3407]: W0510 00:01:59.432842 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.432878 kubelet[3407]: E0510 00:01:59.432854 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.533319 kubelet[3407]: E0510 00:01:59.533289 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.533591 kubelet[3407]: W0510 00:01:59.533478 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.533591 kubelet[3407]: E0510 00:01:59.533506 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.533958 kubelet[3407]: E0510 00:01:59.533855 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.533958 kubelet[3407]: W0510 00:01:59.533870 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.533958 kubelet[3407]: E0510 00:01:59.533892 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.534359 kubelet[3407]: E0510 00:01:59.534281 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.534359 kubelet[3407]: W0510 00:01:59.534294 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.534359 kubelet[3407]: E0510 00:01:59.534314 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.534517 kubelet[3407]: E0510 00:01:59.534495 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.534517 kubelet[3407]: W0510 00:01:59.534516 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.534622 kubelet[3407]: E0510 00:01:59.534540 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.534716 kubelet[3407]: E0510 00:01:59.534699 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.534780 kubelet[3407]: W0510 00:01:59.534713 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.534780 kubelet[3407]: E0510 00:01:59.534752 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.534910 kubelet[3407]: E0510 00:01:59.534894 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.534910 kubelet[3407]: W0510 00:01:59.534907 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.534973 kubelet[3407]: E0510 00:01:59.534923 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.535112 kubelet[3407]: E0510 00:01:59.535097 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.535112 kubelet[3407]: W0510 00:01:59.535110 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.535233 kubelet[3407]: E0510 00:01:59.535126 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.535300 kubelet[3407]: E0510 00:01:59.535283 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.535300 kubelet[3407]: W0510 00:01:59.535297 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.535382 kubelet[3407]: E0510 00:01:59.535314 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.535467 kubelet[3407]: E0510 00:01:59.535452 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.535467 kubelet[3407]: W0510 00:01:59.535466 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.535594 kubelet[3407]: E0510 00:01:59.535523 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.535594 kubelet[3407]: E0510 00:01:59.535593 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.535684 kubelet[3407]: W0510 00:01:59.535601 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.535768 kubelet[3407]: E0510 00:01:59.535631 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.535768 kubelet[3407]: E0510 00:01:59.535752 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.535768 kubelet[3407]: W0510 00:01:59.535761 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.535768 kubelet[3407]: E0510 00:01:59.535786 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.535944 kubelet[3407]: E0510 00:01:59.535901 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.535944 kubelet[3407]: W0510 00:01:59.535909 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.535944 kubelet[3407]: E0510 00:01:59.535926 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.536127 kubelet[3407]: E0510 00:01:59.536110 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.536127 kubelet[3407]: W0510 00:01:59.536126 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.536244 kubelet[3407]: E0510 00:01:59.536143 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.536314 kubelet[3407]: E0510 00:01:59.536296 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.536314 kubelet[3407]: W0510 00:01:59.536310 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.536393 kubelet[3407]: E0510 00:01:59.536325 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.536486 kubelet[3407]: E0510 00:01:59.536471 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.536486 kubelet[3407]: W0510 00:01:59.536484 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.536577 kubelet[3407]: E0510 00:01:59.536500 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.536663 kubelet[3407]: E0510 00:01:59.536646 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.536663 kubelet[3407]: W0510 00:01:59.536660 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.536748 kubelet[3407]: E0510 00:01:59.536675 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.536896 kubelet[3407]: E0510 00:01:59.536881 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.536896 kubelet[3407]: W0510 00:01:59.536895 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.537083 kubelet[3407]: E0510 00:01:59.536951 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.537083 kubelet[3407]: E0510 00:01:59.537036 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.537083 kubelet[3407]: W0510 00:01:59.537044 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.537083 kubelet[3407]: E0510 00:01:59.537059 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.537354 kubelet[3407]: E0510 00:01:59.537194 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.537354 kubelet[3407]: W0510 00:01:59.537202 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.537354 kubelet[3407]: E0510 00:01:59.537216 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.537605 kubelet[3407]: E0510 00:01:59.537508 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.537605 kubelet[3407]: W0510 00:01:59.537522 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.537605 kubelet[3407]: E0510 00:01:59.537546 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.537920 kubelet[3407]: E0510 00:01:59.537884 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.537920 kubelet[3407]: W0510 00:01:59.537899 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.538175 kubelet[3407]: E0510 00:01:59.538119 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.538272 kubelet[3407]: E0510 00:01:59.538261 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.538399 kubelet[3407]: W0510 00:01:59.538316 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.538399 kubelet[3407]: E0510 00:01:59.538344 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.538763 kubelet[3407]: E0510 00:01:59.538620 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.538763 kubelet[3407]: W0510 00:01:59.538632 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.538763 kubelet[3407]: E0510 00:01:59.538662 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.539042 kubelet[3407]: E0510 00:01:59.538933 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.539042 kubelet[3407]: W0510 00:01:59.538946 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.539042 kubelet[3407]: E0510 00:01:59.538978 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.539208 kubelet[3407]: E0510 00:01:59.539194 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.539336 kubelet[3407]: W0510 00:01:59.539253 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.539408 kubelet[3407]: E0510 00:01:59.539394 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.539622 kubelet[3407]: E0510 00:01:59.539610 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.539751 kubelet[3407]: W0510 00:01:59.539675 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.539751 kubelet[3407]: E0510 00:01:59.539705 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.539901 kubelet[3407]: E0510 00:01:59.539877 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.539901 kubelet[3407]: W0510 00:01:59.539897 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.539991 kubelet[3407]: E0510 00:01:59.539915 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.540071 kubelet[3407]: E0510 00:01:59.540054 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.540071 kubelet[3407]: W0510 00:01:59.540068 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.540130 kubelet[3407]: E0510 00:01:59.540083 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.540252 kubelet[3407]: E0510 00:01:59.540237 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.540252 kubelet[3407]: W0510 00:01:59.540251 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.540321 kubelet[3407]: E0510 00:01:59.540265 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.540758 kubelet[3407]: E0510 00:01:59.540672 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.540758 kubelet[3407]: W0510 00:01:59.540687 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.540758 kubelet[3407]: E0510 00:01:59.540701 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.638914 kubelet[3407]: E0510 00:01:59.638743 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.638914 kubelet[3407]: W0510 00:01:59.638765 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.638914 kubelet[3407]: E0510 00:01:59.638786 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.639418 kubelet[3407]: E0510 00:01:59.639292 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.639418 kubelet[3407]: W0510 00:01:59.639306 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.639418 kubelet[3407]: E0510 00:01:59.639317 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.639739 kubelet[3407]: E0510 00:01:59.639531 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.639739 kubelet[3407]: W0510 00:01:59.639541 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.639739 kubelet[3407]: E0510 00:01:59.639552 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.639988 kubelet[3407]: E0510 00:01:59.639883 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.639988 kubelet[3407]: W0510 00:01:59.639896 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.639988 kubelet[3407]: E0510 00:01:59.639931 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.640756 kubelet[3407]: E0510 00:01:59.640581 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.640756 kubelet[3407]: W0510 00:01:59.640596 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.640756 kubelet[3407]: E0510 00:01:59.640607 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.641750 kubelet[3407]: E0510 00:01:59.641577 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.641750 kubelet[3407]: W0510 00:01:59.641630 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.641750 kubelet[3407]: E0510 00:01:59.641645 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.742747 kubelet[3407]: E0510 00:01:59.742693 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.742747 kubelet[3407]: W0510 00:01:59.742743 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.742918 kubelet[3407]: E0510 00:01:59.742762 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.742978 kubelet[3407]: E0510 00:01:59.742956 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.742978 kubelet[3407]: W0510 00:01:59.742973 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.743042 kubelet[3407]: E0510 00:01:59.742983 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.743155 kubelet[3407]: E0510 00:01:59.743138 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.743155 kubelet[3407]: W0510 00:01:59.743153 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.743214 kubelet[3407]: E0510 00:01:59.743161 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.743389 kubelet[3407]: E0510 00:01:59.743371 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.743389 kubelet[3407]: W0510 00:01:59.743385 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.743463 kubelet[3407]: E0510 00:01:59.743394 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.743583 kubelet[3407]: E0510 00:01:59.743568 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.743583 kubelet[3407]: W0510 00:01:59.743581 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.743650 kubelet[3407]: E0510 00:01:59.743590 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.743796 kubelet[3407]: E0510 00:01:59.743779 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.743796 kubelet[3407]: W0510 00:01:59.743795 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.743861 kubelet[3407]: E0510 00:01:59.743805 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.844418 kubelet[3407]: E0510 00:01:59.844301 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.844418 kubelet[3407]: W0510 00:01:59.844324 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.844418 kubelet[3407]: E0510 00:01:59.844342 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.844646 kubelet[3407]: E0510 00:01:59.844530 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.844646 kubelet[3407]: W0510 00:01:59.844539 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.844646 kubelet[3407]: E0510 00:01:59.844549 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.844752 kubelet[3407]: E0510 00:01:59.844693 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.844752 kubelet[3407]: W0510 00:01:59.844702 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.844752 kubelet[3407]: E0510 00:01:59.844710 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.844908 kubelet[3407]: E0510 00:01:59.844891 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.844908 kubelet[3407]: W0510 00:01:59.844906 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.844976 kubelet[3407]: E0510 00:01:59.844916 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.845082 kubelet[3407]: E0510 00:01:59.845066 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.845082 kubelet[3407]: W0510 00:01:59.845080 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.845144 kubelet[3407]: E0510 00:01:59.845091 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.845255 kubelet[3407]: E0510 00:01:59.845238 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.845255 kubelet[3407]: W0510 00:01:59.845252 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.845313 kubelet[3407]: E0510 00:01:59.845261 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.946256 kubelet[3407]: E0510 00:01:59.946222 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.946256 kubelet[3407]: W0510 00:01:59.946247 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.946256 kubelet[3407]: E0510 00:01:59.946265 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.946483 kubelet[3407]: E0510 00:01:59.946463 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.946483 kubelet[3407]: W0510 00:01:59.946478 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.946547 kubelet[3407]: E0510 00:01:59.946488 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.946658 kubelet[3407]: E0510 00:01:59.946643 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.946658 kubelet[3407]: W0510 00:01:59.946656 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.946762 kubelet[3407]: E0510 00:01:59.946665 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.946857 kubelet[3407]: E0510 00:01:59.946840 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.946857 kubelet[3407]: W0510 00:01:59.946854 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.946917 kubelet[3407]: E0510 00:01:59.946863 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.947035 kubelet[3407]: E0510 00:01:59.947019 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.947035 kubelet[3407]: W0510 00:01:59.947033 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.947110 kubelet[3407]: E0510 00:01:59.947041 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.947198 kubelet[3407]: E0510 00:01:59.947184 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.947198 kubelet[3407]: W0510 00:01:59.947196 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.947255 kubelet[3407]: E0510 00:01:59.947205 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:59.968230 kubelet[3407]: E0510 00:01:59.968129 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.968230 kubelet[3407]: W0510 00:01:59.968152 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.968230 kubelet[3407]: E0510 00:01:59.968169 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:59.971164 kubelet[3407]: E0510 00:01:59.971032 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:59.971164 kubelet[3407]: W0510 00:01:59.971055 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:59.971164 kubelet[3407]: E0510 00:01:59.971070 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.048363 kubelet[3407]: E0510 00:02:00.048328 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.048363 kubelet[3407]: W0510 00:02:00.048354 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.048363 kubelet[3407]: E0510 00:02:00.048371 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.048855 kubelet[3407]: E0510 00:02:00.048553 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.048855 kubelet[3407]: W0510 00:02:00.048566 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.048855 kubelet[3407]: E0510 00:02:00.048575 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.048855 kubelet[3407]: E0510 00:02:00.048713 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.048855 kubelet[3407]: W0510 00:02:00.048751 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.048855 kubelet[3407]: E0510 00:02:00.048766 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.048998 kubelet[3407]: E0510 00:02:00.048923 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.048998 kubelet[3407]: W0510 00:02:00.048931 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.048998 kubelet[3407]: E0510 00:02:00.048938 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.149544 kubelet[3407]: E0510 00:02:00.149509 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.149544 kubelet[3407]: W0510 00:02:00.149535 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.149544 kubelet[3407]: E0510 00:02:00.149554 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.149855 kubelet[3407]: E0510 00:02:00.149836 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.149855 kubelet[3407]: W0510 00:02:00.149853 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.149929 kubelet[3407]: E0510 00:02:00.149864 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.150104 kubelet[3407]: E0510 00:02:00.150087 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.150104 kubelet[3407]: W0510 00:02:00.150102 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.150169 kubelet[3407]: E0510 00:02:00.150112 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.150284 kubelet[3407]: E0510 00:02:00.150268 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.150284 kubelet[3407]: W0510 00:02:00.150283 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.150340 kubelet[3407]: E0510 00:02:00.150292 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.217171 kubelet[3407]: E0510 00:02:00.217027 3407 secret.go:194] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition May 10 00:02:00.217171 kubelet[3407]: E0510 00:02:00.217129 3407 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/571fc4c1-4de2-4fae-98fa-2872093bcfa3-typha-certs podName:571fc4c1-4de2-4fae-98fa-2872093bcfa3 nodeName:}" failed. No retries permitted until 2025-05-10 00:02:00.717111688 +0000 UTC m=+25.372249480 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/571fc4c1-4de2-4fae-98fa-2872093bcfa3-typha-certs") pod "calico-typha-f9787c7d-7nbc7" (UID: "571fc4c1-4de2-4fae-98fa-2872093bcfa3") : failed to sync secret cache: timed out waiting for the condition May 10 00:02:00.224324 kubelet[3407]: E0510 00:02:00.224290 3407 projected.go:294] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 10 00:02:00.224324 kubelet[3407]: E0510 00:02:00.224326 3407 projected.go:200] Error preparing data for projected volume kube-api-access-dpj96 for pod calico-system/calico-typha-f9787c7d-7nbc7: failed to sync configmap cache: timed out waiting for the condition May 10 00:02:00.224435 kubelet[3407]: E0510 00:02:00.224374 3407 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/571fc4c1-4de2-4fae-98fa-2872093bcfa3-kube-api-access-dpj96 podName:571fc4c1-4de2-4fae-98fa-2872093bcfa3 nodeName:}" failed. No retries permitted until 2025-05-10 00:02:00.72435821 +0000 UTC m=+25.379496002 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dpj96" (UniqueName: "kubernetes.io/projected/571fc4c1-4de2-4fae-98fa-2872093bcfa3-kube-api-access-dpj96") pod "calico-typha-f9787c7d-7nbc7" (UID: "571fc4c1-4de2-4fae-98fa-2872093bcfa3") : failed to sync configmap cache: timed out waiting for the condition May 10 00:02:00.251501 kubelet[3407]: E0510 00:02:00.251383 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.251501 kubelet[3407]: W0510 00:02:00.251406 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.251501 kubelet[3407]: E0510 00:02:00.251424 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.251687 kubelet[3407]: E0510 00:02:00.251621 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.251687 kubelet[3407]: W0510 00:02:00.251631 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.251687 kubelet[3407]: E0510 00:02:00.251639 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.251877 kubelet[3407]: E0510 00:02:00.251839 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.251877 kubelet[3407]: W0510 00:02:00.251849 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.251877 kubelet[3407]: E0510 00:02:00.251859 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.252036 kubelet[3407]: E0510 00:02:00.252019 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.252036 kubelet[3407]: W0510 00:02:00.252033 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.252090 kubelet[3407]: E0510 00:02:00.252042 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.339406 kubelet[3407]: E0510 00:02:00.339071 3407 projected.go:294] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 10 00:02:00.339406 kubelet[3407]: E0510 00:02:00.339109 3407 projected.go:200] Error preparing data for projected volume kube-api-access-l2fxb for pod calico-system/calico-node-wlwq5: failed to sync configmap cache: timed out waiting for the condition May 10 00:02:00.339406 kubelet[3407]: E0510 00:02:00.339172 3407 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd-kube-api-access-l2fxb podName:7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd nodeName:}" failed. No retries permitted until 2025-05-10 00:02:00.839153606 +0000 UTC m=+25.494291398 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l2fxb" (UniqueName: "kubernetes.io/projected/7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd-kube-api-access-l2fxb") pod "calico-node-wlwq5" (UID: "7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd") : failed to sync configmap cache: timed out waiting for the condition May 10 00:02:00.353080 kubelet[3407]: E0510 00:02:00.352961 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.353080 kubelet[3407]: W0510 00:02:00.352985 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.353080 kubelet[3407]: E0510 00:02:00.353002 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.353283 kubelet[3407]: E0510 00:02:00.353222 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.353283 kubelet[3407]: W0510 00:02:00.353231 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.353283 kubelet[3407]: E0510 00:02:00.353240 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.353418 kubelet[3407]: E0510 00:02:00.353402 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.353418 kubelet[3407]: W0510 00:02:00.353416 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.353485 kubelet[3407]: E0510 00:02:00.353425 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.353588 kubelet[3407]: E0510 00:02:00.353573 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.353588 kubelet[3407]: W0510 00:02:00.353586 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.353643 kubelet[3407]: E0510 00:02:00.353594 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.454741 kubelet[3407]: E0510 00:02:00.454697 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.454741 kubelet[3407]: W0510 00:02:00.454737 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.454915 kubelet[3407]: E0510 00:02:00.454759 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.455910 kubelet[3407]: E0510 00:02:00.455884 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.455910 kubelet[3407]: W0510 00:02:00.455906 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.455994 kubelet[3407]: E0510 00:02:00.455919 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.456922 kubelet[3407]: E0510 00:02:00.456767 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.456922 kubelet[3407]: W0510 00:02:00.456785 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.456922 kubelet[3407]: E0510 00:02:00.456799 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.457254 kubelet[3407]: E0510 00:02:00.457159 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.457687 kubelet[3407]: W0510 00:02:00.457328 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.457687 kubelet[3407]: E0510 00:02:00.457387 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.457815 kubelet[3407]: E0510 00:02:00.457580 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wwnd" podUID="a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb" May 10 00:02:00.548495 kubelet[3407]: E0510 00:02:00.548390 3407 projected.go:294] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 10 00:02:00.549569 kubelet[3407]: E0510 00:02:00.548633 3407 projected.go:200] Error preparing data for projected volume kube-api-access-f68zq for pod calico-system/csi-node-driver-7wwnd: failed to sync configmap cache: timed out waiting for the condition May 10 00:02:00.549569 kubelet[3407]: E0510 00:02:00.548703 3407 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb-kube-api-access-f68zq podName:a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb nodeName:}" failed. No retries permitted until 2025-05-10 00:02:01.048679272 +0000 UTC m=+25.703817104 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-f68zq" (UniqueName: "kubernetes.io/projected/a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb-kube-api-access-f68zq") pod "csi-node-driver-7wwnd" (UID: "a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb") : failed to sync configmap cache: timed out waiting for the condition May 10 00:02:00.558316 kubelet[3407]: E0510 00:02:00.558187 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.558316 kubelet[3407]: W0510 00:02:00.558210 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.558316 kubelet[3407]: E0510 00:02:00.558228 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.558659 kubelet[3407]: E0510 00:02:00.558612 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.558659 kubelet[3407]: W0510 00:02:00.558630 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.558659 kubelet[3407]: E0510 00:02:00.558643 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.558937 kubelet[3407]: E0510 00:02:00.558919 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.558937 kubelet[3407]: W0510 00:02:00.558934 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.559003 kubelet[3407]: E0510 00:02:00.558945 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.559123 kubelet[3407]: E0510 00:02:00.559106 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.559123 kubelet[3407]: W0510 00:02:00.559120 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.559184 kubelet[3407]: E0510 00:02:00.559129 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.660488 kubelet[3407]: E0510 00:02:00.660433 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.660488 kubelet[3407]: W0510 00:02:00.660460 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.660488 kubelet[3407]: E0510 00:02:00.660479 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.660981 kubelet[3407]: E0510 00:02:00.660648 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.660981 kubelet[3407]: W0510 00:02:00.660663 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.660981 kubelet[3407]: E0510 00:02:00.660672 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.660981 kubelet[3407]: E0510 00:02:00.660854 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.660981 kubelet[3407]: W0510 00:02:00.660863 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.660981 kubelet[3407]: E0510 00:02:00.660897 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.661248 kubelet[3407]: E0510 00:02:00.661076 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.661248 kubelet[3407]: W0510 00:02:00.661087 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.661248 kubelet[3407]: E0510 00:02:00.661095 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.762449 kubelet[3407]: E0510 00:02:00.762328 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.762449 kubelet[3407]: W0510 00:02:00.762352 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.762449 kubelet[3407]: E0510 00:02:00.762369 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.762706 kubelet[3407]: E0510 00:02:00.762567 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.762706 kubelet[3407]: W0510 00:02:00.762577 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.762706 kubelet[3407]: E0510 00:02:00.762586 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.762818 kubelet[3407]: E0510 00:02:00.762793 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.762818 kubelet[3407]: W0510 00:02:00.762803 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.762818 kubelet[3407]: E0510 00:02:00.762812 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.763097 kubelet[3407]: E0510 00:02:00.762975 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.763097 kubelet[3407]: W0510 00:02:00.762992 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.763097 kubelet[3407]: E0510 00:02:00.763004 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.763395 kubelet[3407]: E0510 00:02:00.763274 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.763395 kubelet[3407]: W0510 00:02:00.763295 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.763395 kubelet[3407]: E0510 00:02:00.763313 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.763557 kubelet[3407]: E0510 00:02:00.763544 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.763609 kubelet[3407]: W0510 00:02:00.763598 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.763773 kubelet[3407]: E0510 00:02:00.763667 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.763888 kubelet[3407]: E0510 00:02:00.763876 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.763943 kubelet[3407]: W0510 00:02:00.763933 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.764010 kubelet[3407]: E0510 00:02:00.763999 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.764247 kubelet[3407]: E0510 00:02:00.764233 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.764408 kubelet[3407]: W0510 00:02:00.764314 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.764408 kubelet[3407]: E0510 00:02:00.764354 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.764679 kubelet[3407]: E0510 00:02:00.764588 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.764679 kubelet[3407]: W0510 00:02:00.764600 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.764679 kubelet[3407]: E0510 00:02:00.764610 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.765006 kubelet[3407]: E0510 00:02:00.764925 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.765006 kubelet[3407]: W0510 00:02:00.764937 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.765006 kubelet[3407]: E0510 00:02:00.764948 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.765210 kubelet[3407]: E0510 00:02:00.765198 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.765347 kubelet[3407]: W0510 00:02:00.765255 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.765347 kubelet[3407]: E0510 00:02:00.765270 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.765532 kubelet[3407]: E0510 00:02:00.765520 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.765595 kubelet[3407]: W0510 00:02:00.765583 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.765898 kubelet[3407]: E0510 00:02:00.765648 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.769837 kubelet[3407]: E0510 00:02:00.769804 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.769837 kubelet[3407]: W0510 00:02:00.769828 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.770805 kubelet[3407]: E0510 00:02:00.769843 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.771590 kubelet[3407]: E0510 00:02:00.771574 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.772643 kubelet[3407]: W0510 00:02:00.772589 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.772643 kubelet[3407]: E0510 00:02:00.772616 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.837597 containerd[1794]: time="2025-05-10T00:02:00.837495843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f9787c7d-7nbc7,Uid:571fc4c1-4de2-4fae-98fa-2872093bcfa3,Namespace:calico-system,Attempt:0,}" May 10 00:02:00.863977 kubelet[3407]: E0510 00:02:00.863946 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.864126 kubelet[3407]: W0510 00:02:00.864110 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.864315 kubelet[3407]: E0510 00:02:00.864193 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.864540 kubelet[3407]: E0510 00:02:00.864468 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.864540 kubelet[3407]: W0510 00:02:00.864482 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.864540 kubelet[3407]: E0510 00:02:00.864501 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.864804 kubelet[3407]: E0510 00:02:00.864768 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.864804 kubelet[3407]: W0510 00:02:00.864803 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.864883 kubelet[3407]: E0510 00:02:00.864819 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.865013 kubelet[3407]: E0510 00:02:00.864997 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.865052 kubelet[3407]: W0510 00:02:00.865024 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.865052 kubelet[3407]: E0510 00:02:00.865035 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.865213 kubelet[3407]: E0510 00:02:00.865197 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.865213 kubelet[3407]: W0510 00:02:00.865211 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.865270 kubelet[3407]: E0510 00:02:00.865220 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.865426 kubelet[3407]: E0510 00:02:00.865405 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.865466 kubelet[3407]: W0510 00:02:00.865431 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.865466 kubelet[3407]: E0510 00:02:00.865442 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:00.871107 kubelet[3407]: E0510 00:02:00.871040 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.871107 kubelet[3407]: W0510 00:02:00.871055 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.871107 kubelet[3407]: E0510 00:02:00.871069 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.885703 containerd[1794]: time="2025-05-10T00:02:00.885234978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:02:00.885973 containerd[1794]: time="2025-05-10T00:02:00.885589258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:02:00.885973 containerd[1794]: time="2025-05-10T00:02:00.885607258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:00.886220 containerd[1794]: time="2025-05-10T00:02:00.886137938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:00.931116 containerd[1794]: time="2025-05-10T00:02:00.931050713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f9787c7d-7nbc7,Uid:571fc4c1-4de2-4fae-98fa-2872093bcfa3,Namespace:calico-system,Attempt:0,} returns sandbox id \"0674b7706cc48fa1a7c4e30c1c6e05033cb8af0a0801d93ecaef43114148f8f7\"" May 10 00:02:00.933390 containerd[1794]: time="2025-05-10T00:02:00.932713833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 10 00:02:00.965611 kubelet[3407]: E0510 00:02:00.965526 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:00.965611 kubelet[3407]: W0510 00:02:00.965548 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:00.965611 kubelet[3407]: E0510 00:02:00.965567 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:00.970452 containerd[1794]: time="2025-05-10T00:02:00.970102245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wlwq5,Uid:7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd,Namespace:calico-system,Attempt:0,}" May 10 00:02:01.018143 containerd[1794]: time="2025-05-10T00:02:01.018052620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:02:01.018143 containerd[1794]: time="2025-05-10T00:02:01.018105380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:02:01.018143 containerd[1794]: time="2025-05-10T00:02:01.018116820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:01.018524 containerd[1794]: time="2025-05-10T00:02:01.018482940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:01.048281 containerd[1794]: time="2025-05-10T00:02:01.048235029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wlwq5,Uid:7f53d18f-3ef5-4ff4-8bd1-f5d9d9bae9dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb0b193c03850f461abdaa2c6f16ae38d6cc9806b32780c4f5f0aa10fff6da1f\"" May 10 00:02:01.067073 kubelet[3407]: E0510 00:02:01.067015 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:01.067073 kubelet[3407]: W0510 00:02:01.067037 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:01.067073 kubelet[3407]: E0510 00:02:01.067056 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:01.067554 kubelet[3407]: E0510 00:02:01.067290 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:01.067554 kubelet[3407]: W0510 00:02:01.067301 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:01.067554 kubelet[3407]: E0510 00:02:01.067311 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:01.067554 kubelet[3407]: E0510 00:02:01.067473 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:01.067554 kubelet[3407]: W0510 00:02:01.067482 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:01.067554 kubelet[3407]: E0510 00:02:01.067502 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:01.067804 kubelet[3407]: E0510 00:02:01.067654 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:01.067804 kubelet[3407]: W0510 00:02:01.067662 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:01.067804 kubelet[3407]: E0510 00:02:01.067670 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:01.067885 kubelet[3407]: E0510 00:02:01.067871 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:01.067885 kubelet[3407]: W0510 00:02:01.067879 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:01.067934 kubelet[3407]: E0510 00:02:01.067898 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:01.070756 kubelet[3407]: E0510 00:02:01.070652 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:01.070756 kubelet[3407]: W0510 00:02:01.070755 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:01.070892 kubelet[3407]: E0510 00:02:01.070773 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:02.459076 kubelet[3407]: E0510 00:02:02.458602 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wwnd" podUID="a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb" May 10 00:02:02.895552 containerd[1794]: time="2025-05-10T00:02:02.894868091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:02.897220 containerd[1794]: time="2025-05-10T00:02:02.897185692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 10 00:02:02.901974 containerd[1794]: time="2025-05-10T00:02:02.901920813Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:02.906496 containerd[1794]: time="2025-05-10T00:02:02.906463495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:02.907696 containerd[1794]: time="2025-05-10T00:02:02.907260215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.973884182s" May 10 00:02:02.907696 containerd[1794]: time="2025-05-10T00:02:02.907303255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 10 00:02:02.910900 containerd[1794]: time="2025-05-10T00:02:02.909866136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 10 00:02:02.921388 containerd[1794]: time="2025-05-10T00:02:02.920842499Z" level=info msg="CreateContainer within sandbox \"0674b7706cc48fa1a7c4e30c1c6e05033cb8af0a0801d93ecaef43114148f8f7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 10 00:02:02.956980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount865393982.mount: Deactivated successfully. 
May 10 00:02:02.976911 containerd[1794]: time="2025-05-10T00:02:02.976845797Z" level=info msg="CreateContainer within sandbox \"0674b7706cc48fa1a7c4e30c1c6e05033cb8af0a0801d93ecaef43114148f8f7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e318f7a0630afddd7ff8f1ad6439779c5d17bc5167ffea772f1d7953dbf6def5\"" May 10 00:02:02.978965 containerd[1794]: time="2025-05-10T00:02:02.977833517Z" level=info msg="StartContainer for \"e318f7a0630afddd7ff8f1ad6439779c5d17bc5167ffea772f1d7953dbf6def5\"" May 10 00:02:03.039427 containerd[1794]: time="2025-05-10T00:02:03.038460936Z" level=info msg="StartContainer for \"e318f7a0630afddd7ff8f1ad6439779c5d17bc5167ffea772f1d7953dbf6def5\" returns successfully" May 10 00:02:03.637994 kubelet[3407]: E0510 00:02:03.637955 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.637994 kubelet[3407]: W0510 00:02:03.637981 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.637994 kubelet[3407]: E0510 00:02:03.638000 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.638652 kubelet[3407]: E0510 00:02:03.638188 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.638652 kubelet[3407]: W0510 00:02:03.638196 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.638652 kubelet[3407]: E0510 00:02:03.638205 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.638652 kubelet[3407]: E0510 00:02:03.638333 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.638652 kubelet[3407]: W0510 00:02:03.638341 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.638652 kubelet[3407]: E0510 00:02:03.638348 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.638652 kubelet[3407]: E0510 00:02:03.638482 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.638652 kubelet[3407]: W0510 00:02:03.638490 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.638652 kubelet[3407]: E0510 00:02:03.638497 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.638652 kubelet[3407]: E0510 00:02:03.638635 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.639087 kubelet[3407]: W0510 00:02:03.638643 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.639087 kubelet[3407]: E0510 00:02:03.638651 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.639087 kubelet[3407]: E0510 00:02:03.638822 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.639087 kubelet[3407]: W0510 00:02:03.638832 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.639087 kubelet[3407]: E0510 00:02:03.638840 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.639087 kubelet[3407]: E0510 00:02:03.638969 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.639087 kubelet[3407]: W0510 00:02:03.638977 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.639087 kubelet[3407]: E0510 00:02:03.638985 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.639351 kubelet[3407]: E0510 00:02:03.639123 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.639351 kubelet[3407]: W0510 00:02:03.639132 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.639351 kubelet[3407]: E0510 00:02:03.639139 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.639351 kubelet[3407]: E0510 00:02:03.639268 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.639351 kubelet[3407]: W0510 00:02:03.639276 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.639351 kubelet[3407]: E0510 00:02:03.639283 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.639620 kubelet[3407]: E0510 00:02:03.639404 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.639620 kubelet[3407]: W0510 00:02:03.639412 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.639620 kubelet[3407]: E0510 00:02:03.639419 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.639620 kubelet[3407]: E0510 00:02:03.639590 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.639620 kubelet[3407]: W0510 00:02:03.639599 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.639620 kubelet[3407]: E0510 00:02:03.639608 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.639861 kubelet[3407]: E0510 00:02:03.639757 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.639861 kubelet[3407]: W0510 00:02:03.639766 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.639861 kubelet[3407]: E0510 00:02:03.639774 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.639940 kubelet[3407]: E0510 00:02:03.639915 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.639940 kubelet[3407]: W0510 00:02:03.639924 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.639940 kubelet[3407]: E0510 00:02:03.639934 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.640086 kubelet[3407]: E0510 00:02:03.640070 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.640086 kubelet[3407]: W0510 00:02:03.640084 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.640144 kubelet[3407]: E0510 00:02:03.640092 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.640235 kubelet[3407]: E0510 00:02:03.640220 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.640271 kubelet[3407]: W0510 00:02:03.640235 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.640271 kubelet[3407]: E0510 00:02:03.640244 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.686877 kubelet[3407]: E0510 00:02:03.686749 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.686877 kubelet[3407]: W0510 00:02:03.686769 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.686877 kubelet[3407]: E0510 00:02:03.686790 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.687075 kubelet[3407]: E0510 00:02:03.687053 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.687075 kubelet[3407]: W0510 00:02:03.687070 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.687160 kubelet[3407]: E0510 00:02:03.687086 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.687370 kubelet[3407]: E0510 00:02:03.687242 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.687370 kubelet[3407]: W0510 00:02:03.687258 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.687370 kubelet[3407]: E0510 00:02:03.687268 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.687666 kubelet[3407]: E0510 00:02:03.687540 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.687666 kubelet[3407]: W0510 00:02:03.687559 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.687666 kubelet[3407]: E0510 00:02:03.687581 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.687902 kubelet[3407]: E0510 00:02:03.687887 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.687963 kubelet[3407]: W0510 00:02:03.687952 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.688026 kubelet[3407]: E0510 00:02:03.688015 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.688338 kubelet[3407]: E0510 00:02:03.688246 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.688338 kubelet[3407]: W0510 00:02:03.688259 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.688338 kubelet[3407]: E0510 00:02:03.688286 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.688505 kubelet[3407]: E0510 00:02:03.688491 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.688563 kubelet[3407]: W0510 00:02:03.688552 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.688697 kubelet[3407]: E0510 00:02:03.688623 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.688835 kubelet[3407]: E0510 00:02:03.688821 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.688892 kubelet[3407]: W0510 00:02:03.688881 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.689063 kubelet[3407]: E0510 00:02:03.688960 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.689176 kubelet[3407]: E0510 00:02:03.689163 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.689233 kubelet[3407]: W0510 00:02:03.689223 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.689295 kubelet[3407]: E0510 00:02:03.689284 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.689510 kubelet[3407]: E0510 00:02:03.689482 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.689510 kubelet[3407]: W0510 00:02:03.689499 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.689510 kubelet[3407]: E0510 00:02:03.689510 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.689650 kubelet[3407]: E0510 00:02:03.689637 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.689650 kubelet[3407]: W0510 00:02:03.689644 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.689694 kubelet[3407]: E0510 00:02:03.689652 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.689842 kubelet[3407]: E0510 00:02:03.689827 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.689842 kubelet[3407]: W0510 00:02:03.689839 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.689919 kubelet[3407]: E0510 00:02:03.689857 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.690273 kubelet[3407]: E0510 00:02:03.690249 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.690273 kubelet[3407]: W0510 00:02:03.690268 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.690350 kubelet[3407]: E0510 00:02:03.690287 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.690481 kubelet[3407]: E0510 00:02:03.690465 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.690481 kubelet[3407]: W0510 00:02:03.690478 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.690548 kubelet[3407]: E0510 00:02:03.690496 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.690661 kubelet[3407]: E0510 00:02:03.690645 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.690661 kubelet[3407]: W0510 00:02:03.690658 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.690739 kubelet[3407]: E0510 00:02:03.690671 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.691212 kubelet[3407]: E0510 00:02:03.691185 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.691212 kubelet[3407]: W0510 00:02:03.691206 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.691293 kubelet[3407]: E0510 00:02:03.691227 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:03.692305 kubelet[3407]: E0510 00:02:03.691964 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.692305 kubelet[3407]: W0510 00:02:03.691979 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.692305 kubelet[3407]: E0510 00:02:03.692135 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:02:03.692305 kubelet[3407]: W0510 00:02:03.692142 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:02:03.692305 kubelet[3407]: E0510 00:02:03.692152 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:02:03.692305 kubelet[3407]: E0510 00:02:03.692172 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:02:04.206783 containerd[1794]: time="2025-05-10T00:02:04.206259864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:04.210165 containerd[1794]: time="2025-05-10T00:02:04.210110225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 10 00:02:04.215656 containerd[1794]: time="2025-05-10T00:02:04.215602027Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:04.220489 containerd[1794]: time="2025-05-10T00:02:04.220422109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:04.221277 containerd[1794]: time="2025-05-10T00:02:04.221144069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.311240813s" May 10 00:02:04.221277 containerd[1794]: time="2025-05-10T00:02:04.221176109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 10 00:02:04.225759 containerd[1794]: time="2025-05-10T00:02:04.225615110Z" level=info msg="CreateContainer within sandbox \"fb0b193c03850f461abdaa2c6f16ae38d6cc9806b32780c4f5f0aa10fff6da1f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 10 00:02:04.258153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount604879380.mount: Deactivated successfully. May 10 00:02:04.272564 containerd[1794]: time="2025-05-10T00:02:04.272503565Z" level=info msg="CreateContainer within sandbox \"fb0b193c03850f461abdaa2c6f16ae38d6cc9806b32780c4f5f0aa10fff6da1f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d45960a5c4da9233acb76082b3370606cb54c0a4f34cefc118d44e43253cb4ed\"" May 10 00:02:04.273217 containerd[1794]: time="2025-05-10T00:02:04.273186165Z" level=info msg="StartContainer for \"d45960a5c4da9233acb76082b3370606cb54c0a4f34cefc118d44e43253cb4ed\"" May 10 00:02:04.327942 containerd[1794]: time="2025-05-10T00:02:04.327865543Z" level=info msg="StartContainer for \"d45960a5c4da9233acb76082b3370606cb54c0a4f34cefc118d44e43253cb4ed\" returns successfully" May 10 00:02:04.458121 kubelet[3407]: E0510 00:02:04.457998 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wwnd" podUID="a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb" May 10 00:02:04.574384 kubelet[3407]: I0510 00:02:04.574249 3407 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:02:04.596886 kubelet[3407]: I0510 00:02:04.596766 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f9787c7d-7nbc7" podStartSLOduration=3.620043364 podStartE2EDuration="5.596746307s" podCreationTimestamp="2025-05-10 00:01:59 +0000 UTC" firstStartedPulling="2025-05-10 00:02:00.932332633 +0000 UTC m=+25.587470465" lastFinishedPulling="2025-05-10 00:02:02.909035576 +0000 UTC m=+27.564173408" observedRunningTime="2025-05-10 00:02:03.585487749 +0000 UTC m=+28.240625581" 
watchObservedRunningTime="2025-05-10 00:02:04.596746307 +0000 UTC m=+29.251884139" May 10 00:02:04.914144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d45960a5c4da9233acb76082b3370606cb54c0a4f34cefc118d44e43253cb4ed-rootfs.mount: Deactivated successfully. May 10 00:02:05.236843 containerd[1794]: time="2025-05-10T00:02:05.236740949Z" level=info msg="shim disconnected" id=d45960a5c4da9233acb76082b3370606cb54c0a4f34cefc118d44e43253cb4ed namespace=k8s.io May 10 00:02:05.236843 containerd[1794]: time="2025-05-10T00:02:05.236801869Z" level=warning msg="cleaning up after shim disconnected" id=d45960a5c4da9233acb76082b3370606cb54c0a4f34cefc118d44e43253cb4ed namespace=k8s.io May 10 00:02:05.236843 containerd[1794]: time="2025-05-10T00:02:05.236810189Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:02:05.580173 containerd[1794]: time="2025-05-10T00:02:05.578587217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 10 00:02:06.458462 kubelet[3407]: E0510 00:02:06.458407 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wwnd" podUID="a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb" May 10 00:02:08.394556 containerd[1794]: time="2025-05-10T00:02:08.393774100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:08.396417 containerd[1794]: time="2025-05-10T00:02:08.396391101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 10 00:02:08.399457 containerd[1794]: time="2025-05-10T00:02:08.399428541Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" May 10 00:02:08.404403 containerd[1794]: time="2025-05-10T00:02:08.404374743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:08.405288 containerd[1794]: time="2025-05-10T00:02:08.404945903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.825997806s" May 10 00:02:08.405683 containerd[1794]: time="2025-05-10T00:02:08.405660743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 10 00:02:08.409340 containerd[1794]: time="2025-05-10T00:02:08.409315144Z" level=info msg="CreateContainer within sandbox \"fb0b193c03850f461abdaa2c6f16ae38d6cc9806b32780c4f5f0aa10fff6da1f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 10 00:02:08.443887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3713854895.mount: Deactivated successfully. 
May 10 00:02:08.458127 kubelet[3407]: E0510 00:02:08.458073 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wwnd" podUID="a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb" May 10 00:02:08.459157 containerd[1794]: time="2025-05-10T00:02:08.458842636Z" level=info msg="CreateContainer within sandbox \"fb0b193c03850f461abdaa2c6f16ae38d6cc9806b32780c4f5f0aa10fff6da1f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b89ca8b4715fb42fb723ea450b99790494f29aa19285d7dfdabcd5b9b55a3981\"" May 10 00:02:08.459648 containerd[1794]: time="2025-05-10T00:02:08.459622316Z" level=info msg="StartContainer for \"b89ca8b4715fb42fb723ea450b99790494f29aa19285d7dfdabcd5b9b55a3981\"" May 10 00:02:08.489342 systemd[1]: run-containerd-runc-k8s.io-b89ca8b4715fb42fb723ea450b99790494f29aa19285d7dfdabcd5b9b55a3981-runc.elXnuA.mount: Deactivated successfully. May 10 00:02:08.518106 containerd[1794]: time="2025-05-10T00:02:08.518048811Z" level=info msg="StartContainer for \"b89ca8b4715fb42fb723ea450b99790494f29aa19285d7dfdabcd5b9b55a3981\" returns successfully" May 10 00:02:09.524433 containerd[1794]: time="2025-05-10T00:02:09.524381024Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:02:09.542714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b89ca8b4715fb42fb723ea450b99790494f29aa19285d7dfdabcd5b9b55a3981-rootfs.mount: Deactivated successfully. 
May 10 00:02:09.587193 kubelet[3407]: I0510 00:02:09.586423 3407 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 10 00:02:09.632626 kubelet[3407]: I0510 00:02:09.631866 3407 topology_manager.go:215] "Topology Admit Handler" podUID="4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d" podNamespace="calico-system" podName="calico-kube-controllers-7db954c79f-lcw2d" May 10 00:02:09.638529 kubelet[3407]: I0510 00:02:09.637912 3407 topology_manager.go:215] "Topology Admit Handler" podUID="33a5d1c8-646f-487e-a061-b26667f1063e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fkpjk" May 10 00:02:09.639015 kubelet[3407]: I0510 00:02:09.638657 3407 topology_manager.go:215] "Topology Admit Handler" podUID="0b77bf54-6b1a-44ba-a50d-01c24a8c08ab" podNamespace="calico-apiserver" podName="calico-apiserver-794797747f-x94dk" May 10 00:02:09.640741 kubelet[3407]: I0510 00:02:09.640383 3407 topology_manager.go:215] "Topology Admit Handler" podUID="0d72ed92-d321-47fa-9ed1-47ca14227fe8" podNamespace="calico-apiserver" podName="calico-apiserver-794797747f-2h685" May 10 00:02:09.641383 kubelet[3407]: I0510 00:02:09.641062 3407 topology_manager.go:215] "Topology Admit Handler" podUID="ffcbd6aa-6c16-4f74-ab76-6cff432c2624" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jl5p6" May 10 00:02:09.728507 kubelet[3407]: I0510 00:02:09.728328 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d-tigera-ca-bundle\") pod \"calico-kube-controllers-7db954c79f-lcw2d\" (UID: \"4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d\") " pod="calico-system/calico-kube-controllers-7db954c79f-lcw2d" May 10 00:02:09.728507 kubelet[3407]: I0510 00:02:09.728383 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/0b77bf54-6b1a-44ba-a50d-01c24a8c08ab-calico-apiserver-certs\") pod \"calico-apiserver-794797747f-x94dk\" (UID: \"0b77bf54-6b1a-44ba-a50d-01c24a8c08ab\") " pod="calico-apiserver/calico-apiserver-794797747f-x94dk" May 10 00:02:09.728507 kubelet[3407]: I0510 00:02:09.728404 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd8m7\" (UniqueName: \"kubernetes.io/projected/0b77bf54-6b1a-44ba-a50d-01c24a8c08ab-kube-api-access-gd8m7\") pod \"calico-apiserver-794797747f-x94dk\" (UID: \"0b77bf54-6b1a-44ba-a50d-01c24a8c08ab\") " pod="calico-apiserver/calico-apiserver-794797747f-x94dk" May 10 00:02:09.728507 kubelet[3407]: I0510 00:02:09.728424 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d72ed92-d321-47fa-9ed1-47ca14227fe8-calico-apiserver-certs\") pod \"calico-apiserver-794797747f-2h685\" (UID: \"0d72ed92-d321-47fa-9ed1-47ca14227fe8\") " pod="calico-apiserver/calico-apiserver-794797747f-2h685" May 10 00:02:09.728507 kubelet[3407]: I0510 00:02:09.728444 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33a5d1c8-646f-487e-a061-b26667f1063e-config-volume\") pod \"coredns-7db6d8ff4d-fkpjk\" (UID: \"33a5d1c8-646f-487e-a061-b26667f1063e\") " pod="kube-system/coredns-7db6d8ff4d-fkpjk" May 10 00:02:09.729213 kubelet[3407]: I0510 00:02:09.728463 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzxbx\" (UniqueName: \"kubernetes.io/projected/4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d-kube-api-access-dzxbx\") pod \"calico-kube-controllers-7db954c79f-lcw2d\" (UID: \"4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d\") " pod="calico-system/calico-kube-controllers-7db954c79f-lcw2d" May 10 00:02:09.729213 kubelet[3407]: I0510 
00:02:09.728495 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r2p7\" (UniqueName: \"kubernetes.io/projected/ffcbd6aa-6c16-4f74-ab76-6cff432c2624-kube-api-access-6r2p7\") pod \"coredns-7db6d8ff4d-jl5p6\" (UID: \"ffcbd6aa-6c16-4f74-ab76-6cff432c2624\") " pod="kube-system/coredns-7db6d8ff4d-jl5p6" May 10 00:02:09.729213 kubelet[3407]: I0510 00:02:09.728521 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffcbd6aa-6c16-4f74-ab76-6cff432c2624-config-volume\") pod \"coredns-7db6d8ff4d-jl5p6\" (UID: \"ffcbd6aa-6c16-4f74-ab76-6cff432c2624\") " pod="kube-system/coredns-7db6d8ff4d-jl5p6" May 10 00:02:09.729213 kubelet[3407]: I0510 00:02:09.728542 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k57b\" (UniqueName: \"kubernetes.io/projected/0d72ed92-d321-47fa-9ed1-47ca14227fe8-kube-api-access-2k57b\") pod \"calico-apiserver-794797747f-2h685\" (UID: \"0d72ed92-d321-47fa-9ed1-47ca14227fe8\") " pod="calico-apiserver/calico-apiserver-794797747f-2h685" May 10 00:02:09.729213 kubelet[3407]: I0510 00:02:09.728567 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csnkg\" (UniqueName: \"kubernetes.io/projected/33a5d1c8-646f-487e-a061-b26667f1063e-kube-api-access-csnkg\") pod \"coredns-7db6d8ff4d-fkpjk\" (UID: \"33a5d1c8-646f-487e-a061-b26667f1063e\") " pod="kube-system/coredns-7db6d8ff4d-fkpjk" May 10 00:02:10.676666 containerd[1794]: time="2025-05-10T00:02:10.675647113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jl5p6,Uid:ffcbd6aa-6c16-4f74-ab76-6cff432c2624,Namespace:kube-system,Attempt:0,}" May 10 00:02:10.676666 containerd[1794]: time="2025-05-10T00:02:10.675708073Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7db954c79f-lcw2d,Uid:4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d,Namespace:calico-system,Attempt:0,}" May 10 00:02:10.676666 containerd[1794]: time="2025-05-10T00:02:10.675966353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wwnd,Uid:a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb,Namespace:calico-system,Attempt:0,}" May 10 00:02:10.677243 containerd[1794]: time="2025-05-10T00:02:10.677043834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794797747f-x94dk,Uid:0b77bf54-6b1a-44ba-a50d-01c24a8c08ab,Namespace:calico-apiserver,Attempt:0,}" May 10 00:02:10.678013 containerd[1794]: time="2025-05-10T00:02:10.677275194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794797747f-2h685,Uid:0d72ed92-d321-47fa-9ed1-47ca14227fe8,Namespace:calico-apiserver,Attempt:0,}" May 10 00:02:10.678013 containerd[1794]: time="2025-05-10T00:02:10.677450754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fkpjk,Uid:33a5d1c8-646f-487e-a061-b26667f1063e,Namespace:kube-system,Attempt:0,}" May 10 00:02:10.722569 containerd[1794]: time="2025-05-10T00:02:10.722504045Z" level=info msg="shim disconnected" id=b89ca8b4715fb42fb723ea450b99790494f29aa19285d7dfdabcd5b9b55a3981 namespace=k8s.io May 10 00:02:10.722700 containerd[1794]: time="2025-05-10T00:02:10.722561565Z" level=warning msg="cleaning up after shim disconnected" id=b89ca8b4715fb42fb723ea450b99790494f29aa19285d7dfdabcd5b9b55a3981 namespace=k8s.io May 10 00:02:10.722700 containerd[1794]: time="2025-05-10T00:02:10.722631525Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:02:10.968776 containerd[1794]: time="2025-05-10T00:02:10.967695627Z" level=error msg="Failed to destroy network for sandbox \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:10.969733 containerd[1794]: time="2025-05-10T00:02:10.969428387Z" level=error msg="encountered an error cleaning up failed sandbox \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:10.969980 containerd[1794]: time="2025-05-10T00:02:10.969942747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db954c79f-lcw2d,Uid:4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:10.970345 kubelet[3407]: E0510 00:02:10.970296 3407 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:10.971873 kubelet[3407]: E0510 00:02:10.970381 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7db954c79f-lcw2d" May 10 00:02:10.971873 kubelet[3407]: E0510 00:02:10.970406 3407 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7db954c79f-lcw2d" May 10 00:02:10.971873 kubelet[3407]: E0510 00:02:10.970464 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7db954c79f-lcw2d_calico-system(4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7db954c79f-lcw2d_calico-system(4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7db954c79f-lcw2d" podUID="4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d" May 10 00:02:11.000939 containerd[1794]: time="2025-05-10T00:02:11.000890315Z" level=error msg="Failed to destroy network for sandbox \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.002988 containerd[1794]: time="2025-05-10T00:02:11.002853235Z" level=error msg="encountered an error cleaning up failed sandbox 
\"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.002988 containerd[1794]: time="2025-05-10T00:02:11.002918675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jl5p6,Uid:ffcbd6aa-6c16-4f74-ab76-6cff432c2624,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.003244 kubelet[3407]: E0510 00:02:11.003145 3407 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.003244 kubelet[3407]: E0510 00:02:11.003197 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jl5p6" May 10 00:02:11.003244 kubelet[3407]: E0510 00:02:11.003216 3407 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jl5p6" May 10 00:02:11.003349 kubelet[3407]: E0510 00:02:11.003254 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jl5p6_kube-system(ffcbd6aa-6c16-4f74-ab76-6cff432c2624)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jl5p6_kube-system(ffcbd6aa-6c16-4f74-ab76-6cff432c2624)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jl5p6" podUID="ffcbd6aa-6c16-4f74-ab76-6cff432c2624" May 10 00:02:11.042464 containerd[1794]: time="2025-05-10T00:02:11.042183325Z" level=error msg="Failed to destroy network for sandbox \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.042897 containerd[1794]: time="2025-05-10T00:02:11.042779445Z" level=error msg="encountered an error cleaning up failed sandbox \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.042897 containerd[1794]: time="2025-05-10T00:02:11.042852406Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794797747f-2h685,Uid:0d72ed92-d321-47fa-9ed1-47ca14227fe8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.043167 kubelet[3407]: E0510 00:02:11.043126 3407 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.043234 kubelet[3407]: E0510 00:02:11.043189 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-794797747f-2h685" May 10 00:02:11.043234 kubelet[3407]: E0510 00:02:11.043208 3407 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-794797747f-2h685" May 10 00:02:11.043291 kubelet[3407]: E0510 
00:02:11.043249 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-794797747f-2h685_calico-apiserver(0d72ed92-d321-47fa-9ed1-47ca14227fe8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-794797747f-2h685_calico-apiserver(0d72ed92-d321-47fa-9ed1-47ca14227fe8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-794797747f-2h685" podUID="0d72ed92-d321-47fa-9ed1-47ca14227fe8" May 10 00:02:11.050180 containerd[1794]: time="2025-05-10T00:02:11.049633687Z" level=error msg="Failed to destroy network for sandbox \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.050180 containerd[1794]: time="2025-05-10T00:02:11.050056447Z" level=error msg="encountered an error cleaning up failed sandbox \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.050180 containerd[1794]: time="2025-05-10T00:02:11.050112247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794797747f-x94dk,Uid:0b77bf54-6b1a-44ba-a50d-01c24a8c08ab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.050912 kubelet[3407]: E0510 00:02:11.050400 3407 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.050912 kubelet[3407]: E0510 00:02:11.050493 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-794797747f-x94dk" May 10 00:02:11.050912 kubelet[3407]: E0510 00:02:11.050518 3407 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-794797747f-x94dk" May 10 00:02:11.051086 kubelet[3407]: E0510 00:02:11.050555 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-794797747f-x94dk_calico-apiserver(0b77bf54-6b1a-44ba-a50d-01c24a8c08ab)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-794797747f-x94dk_calico-apiserver(0b77bf54-6b1a-44ba-a50d-01c24a8c08ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-794797747f-x94dk" podUID="0b77bf54-6b1a-44ba-a50d-01c24a8c08ab" May 10 00:02:11.055910 containerd[1794]: time="2025-05-10T00:02:11.055862289Z" level=error msg="Failed to destroy network for sandbox \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.056473 containerd[1794]: time="2025-05-10T00:02:11.056350129Z" level=error msg="encountered an error cleaning up failed sandbox \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.056532 containerd[1794]: time="2025-05-10T00:02:11.056494649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wwnd,Uid:a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.057036 kubelet[3407]: E0510 00:02:11.056713 3407 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.057036 kubelet[3407]: E0510 00:02:11.056779 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7wwnd" May 10 00:02:11.057036 kubelet[3407]: E0510 00:02:11.056798 3407 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7wwnd" May 10 00:02:11.057180 kubelet[3407]: E0510 00:02:11.056832 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7wwnd_calico-system(a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7wwnd_calico-system(a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7wwnd" podUID="a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb" May 10 00:02:11.057920 containerd[1794]: time="2025-05-10T00:02:11.057884089Z" level=error msg="Failed to destroy network for sandbox \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.058805 containerd[1794]: time="2025-05-10T00:02:11.058700209Z" level=error msg="encountered an error cleaning up failed sandbox \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.058882 containerd[1794]: time="2025-05-10T00:02:11.058810250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fkpjk,Uid:33a5d1c8-646f-487e-a061-b26667f1063e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.059083 kubelet[3407]: E0510 00:02:11.059038 3407 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
May 10 00:02:11.059154 kubelet[3407]: E0510 00:02:11.059092 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fkpjk" May 10 00:02:11.059154 kubelet[3407]: E0510 00:02:11.059110 3407 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fkpjk" May 10 00:02:11.059205 kubelet[3407]: E0510 00:02:11.059142 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-fkpjk_kube-system(33a5d1c8-646f-487e-a061-b26667f1063e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-fkpjk_kube-system(33a5d1c8-646f-487e-a061-b26667f1063e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fkpjk" podUID="33a5d1c8-646f-487e-a061-b26667f1063e" May 10 00:02:11.544810 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275-shm.mount: Deactivated successfully. 
May 10 00:02:11.544951 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311-shm.mount: Deactivated successfully. May 10 00:02:11.592263 kubelet[3407]: I0510 00:02:11.592229 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" May 10 00:02:11.593454 containerd[1794]: time="2025-05-10T00:02:11.593230624Z" level=info msg="StopPodSandbox for \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\"" May 10 00:02:11.594551 containerd[1794]: time="2025-05-10T00:02:11.593564384Z" level=info msg="Ensure that sandbox 500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f in task-service has been cleanup successfully" May 10 00:02:11.594631 kubelet[3407]: I0510 00:02:11.594504 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" May 10 00:02:11.595558 containerd[1794]: time="2025-05-10T00:02:11.595390864Z" level=info msg="StopPodSandbox for \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\"" May 10 00:02:11.595955 containerd[1794]: time="2025-05-10T00:02:11.595755744Z" level=info msg="Ensure that sandbox f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc in task-service has been cleanup successfully" May 10 00:02:11.597403 kubelet[3407]: I0510 00:02:11.597358 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" May 10 00:02:11.599530 containerd[1794]: time="2025-05-10T00:02:11.599501065Z" level=info msg="StopPodSandbox for \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\"" May 10 00:02:11.600001 containerd[1794]: time="2025-05-10T00:02:11.599976025Z" level=info msg="Ensure that sandbox 
73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311 in task-service has been cleanup successfully" May 10 00:02:11.600395 kubelet[3407]: I0510 00:02:11.600357 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" May 10 00:02:11.602085 containerd[1794]: time="2025-05-10T00:02:11.601377586Z" level=info msg="StopPodSandbox for \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\"" May 10 00:02:11.604057 containerd[1794]: time="2025-05-10T00:02:11.602966226Z" level=info msg="Ensure that sandbox 83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b in task-service has been cleanup successfully" May 10 00:02:11.604756 kubelet[3407]: I0510 00:02:11.604574 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" May 10 00:02:11.605619 containerd[1794]: time="2025-05-10T00:02:11.605577387Z" level=info msg="StopPodSandbox for \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\"" May 10 00:02:11.605766 containerd[1794]: time="2025-05-10T00:02:11.605741187Z" level=info msg="Ensure that sandbox 994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275 in task-service has been cleanup successfully" May 10 00:02:11.622060 containerd[1794]: time="2025-05-10T00:02:11.621172391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 10 00:02:11.647467 kubelet[3407]: I0510 00:02:11.647338 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" May 10 00:02:11.652436 containerd[1794]: time="2025-05-10T00:02:11.650679278Z" level=info msg="StopPodSandbox for \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\"" May 10 00:02:11.653117 containerd[1794]: time="2025-05-10T00:02:11.652987239Z" level=info 
msg="Ensure that sandbox cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e in task-service has been cleanup successfully" May 10 00:02:11.697268 containerd[1794]: time="2025-05-10T00:02:11.697187130Z" level=error msg="StopPodSandbox for \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\" failed" error="failed to destroy network for sandbox \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.698143 kubelet[3407]: E0510 00:02:11.697395 3407 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" May 10 00:02:11.698143 kubelet[3407]: E0510 00:02:11.697448 3407 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc"} May 10 00:02:11.698143 kubelet[3407]: E0510 00:02:11.697504 3407 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:02:11.698143 kubelet[3407]: E0510 
00:02:11.697525 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7wwnd" podUID="a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb" May 10 00:02:11.707258 containerd[1794]: time="2025-05-10T00:02:11.707204372Z" level=error msg="StopPodSandbox for \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\" failed" error="failed to destroy network for sandbox \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.707788 kubelet[3407]: E0510 00:02:11.707749 3407 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" May 10 00:02:11.708183 kubelet[3407]: E0510 00:02:11.707929 3407 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e"} May 10 00:02:11.708183 kubelet[3407]: E0510 00:02:11.708124 3407 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"33a5d1c8-646f-487e-a061-b26667f1063e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:02:11.708183 kubelet[3407]: E0510 00:02:11.708151 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33a5d1c8-646f-487e-a061-b26667f1063e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fkpjk" podUID="33a5d1c8-646f-487e-a061-b26667f1063e" May 10 00:02:11.720897 containerd[1794]: time="2025-05-10T00:02:11.720847856Z" level=error msg="StopPodSandbox for \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\" failed" error="failed to destroy network for sandbox \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.721420 containerd[1794]: time="2025-05-10T00:02:11.721107976Z" level=error msg="StopPodSandbox for \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\" failed" error="failed to destroy network for sandbox \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" May 10 00:02:11.721902 kubelet[3407]: E0510 00:02:11.721757 3407 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" May 10 00:02:11.722221 kubelet[3407]: E0510 00:02:11.721923 3407 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275"} May 10 00:02:11.722221 kubelet[3407]: E0510 00:02:11.721826 3407 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" May 10 00:02:11.722221 kubelet[3407]: E0510 00:02:11.721959 3407 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b"} May 10 00:02:11.722221 kubelet[3407]: E0510 00:02:11.721985 3407 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0d72ed92-d321-47fa-9ed1-47ca14227fe8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:02:11.722221 kubelet[3407]: E0510 00:02:11.722008 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0d72ed92-d321-47fa-9ed1-47ca14227fe8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-794797747f-2h685" podUID="0d72ed92-d321-47fa-9ed1-47ca14227fe8" May 10 00:02:11.722476 kubelet[3407]: E0510 00:02:11.722140 3407 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ffcbd6aa-6c16-4f74-ab76-6cff432c2624\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:02:11.722476 kubelet[3407]: E0510 00:02:11.722167 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ffcbd6aa-6c16-4f74-ab76-6cff432c2624\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jl5p6" podUID="ffcbd6aa-6c16-4f74-ab76-6cff432c2624" 
May 10 00:02:11.723148 containerd[1794]: time="2025-05-10T00:02:11.722832536Z" level=error msg="StopPodSandbox for \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\" failed" error="failed to destroy network for sandbox \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.723258 kubelet[3407]: E0510 00:02:11.723014 3407 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" May 10 00:02:11.723258 kubelet[3407]: E0510 00:02:11.723045 3407 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311"} May 10 00:02:11.723258 kubelet[3407]: E0510 00:02:11.723084 3407 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:02:11.723258 kubelet[3407]: E0510 00:02:11.723103 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7db954c79f-lcw2d" podUID="4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d" May 10 00:02:11.723979 containerd[1794]: time="2025-05-10T00:02:11.723845697Z" level=error msg="StopPodSandbox for \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\" failed" error="failed to destroy network for sandbox \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:02:11.724093 kubelet[3407]: E0510 00:02:11.724054 3407 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" May 10 00:02:11.724135 kubelet[3407]: E0510 00:02:11.724094 3407 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f"} May 10 00:02:11.724163 kubelet[3407]: E0510 00:02:11.724129 3407 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b77bf54-6b1a-44ba-a50d-01c24a8c08ab\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:02:11.724163 kubelet[3407]: E0510 00:02:11.724151 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b77bf54-6b1a-44ba-a50d-01c24a8c08ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-794797747f-x94dk" podUID="0b77bf54-6b1a-44ba-a50d-01c24a8c08ab" May 10 00:02:15.728026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount686595116.mount: Deactivated successfully. 
May 10 00:02:16.003931 kubelet[3407]: I0510 00:02:16.003343 3407 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:02:16.251746 containerd[1794]: time="2025-05-10T00:02:16.251029652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:16.256293 containerd[1794]: time="2025-05-10T00:02:16.256199614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 10 00:02:16.260296 containerd[1794]: time="2025-05-10T00:02:16.260240615Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:16.265008 containerd[1794]: time="2025-05-10T00:02:16.264951737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:16.266064 containerd[1794]: time="2025-05-10T00:02:16.265525257Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.640889265s" May 10 00:02:16.266064 containerd[1794]: time="2025-05-10T00:02:16.265562977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 10 00:02:16.276228 containerd[1794]: time="2025-05-10T00:02:16.276077781Z" level=info msg="CreateContainer within sandbox \"fb0b193c03850f461abdaa2c6f16ae38d6cc9806b32780c4f5f0aa10fff6da1f\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" May 10 00:02:16.341953 containerd[1794]: time="2025-05-10T00:02:16.341897483Z" level=info msg="CreateContainer within sandbox \"fb0b193c03850f461abdaa2c6f16ae38d6cc9806b32780c4f5f0aa10fff6da1f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9a0222f80141386513c7fad6e499a6e6a227a3735cefd97e0c7d03b23055b359\"" May 10 00:02:16.342763 containerd[1794]: time="2025-05-10T00:02:16.342671963Z" level=info msg="StartContainer for \"9a0222f80141386513c7fad6e499a6e6a227a3735cefd97e0c7d03b23055b359\"" May 10 00:02:16.402516 containerd[1794]: time="2025-05-10T00:02:16.402458704Z" level=info msg="StartContainer for \"9a0222f80141386513c7fad6e499a6e6a227a3735cefd97e0c7d03b23055b359\" returns successfully" May 10 00:02:16.683602 kubelet[3407]: I0510 00:02:16.683522 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wlwq5" podStartSLOduration=2.466774213 podStartE2EDuration="17.68350268s" podCreationTimestamp="2025-05-10 00:01:59 +0000 UTC" firstStartedPulling="2025-05-10 00:02:01.04945423 +0000 UTC m=+25.704592062" lastFinishedPulling="2025-05-10 00:02:16.266182697 +0000 UTC m=+40.921320529" observedRunningTime="2025-05-10 00:02:16.68234872 +0000 UTC m=+41.337486552" watchObservedRunningTime="2025-05-10 00:02:16.68350268 +0000 UTC m=+41.338640512" May 10 00:02:16.771146 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 10 00:02:16.771262 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 10 00:02:18.380832 kernel: bpftool[4770]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 10 00:02:18.670979 systemd-networkd[1380]: vxlan.calico: Link UP May 10 00:02:18.670993 systemd-networkd[1380]: vxlan.calico: Gained carrier May 10 00:02:19.711922 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL May 10 00:02:22.460972 containerd[1794]: time="2025-05-10T00:02:22.460886210Z" level=info msg="StopPodSandbox for \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\"" May 10 00:02:22.569978 containerd[1794]: 2025-05-10 00:02:22.534 [INFO][4880] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" May 10 00:02:22.569978 containerd[1794]: 2025-05-10 00:02:22.536 [INFO][4880] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" iface="eth0" netns="/var/run/netns/cni-076d49fa-cd44-e7b8-76b5-d83b175773fc" May 10 00:02:22.569978 containerd[1794]: 2025-05-10 00:02:22.536 [INFO][4880] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" iface="eth0" netns="/var/run/netns/cni-076d49fa-cd44-e7b8-76b5-d83b175773fc" May 10 00:02:22.569978 containerd[1794]: 2025-05-10 00:02:22.536 [INFO][4880] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" iface="eth0" netns="/var/run/netns/cni-076d49fa-cd44-e7b8-76b5-d83b175773fc" May 10 00:02:22.569978 containerd[1794]: 2025-05-10 00:02:22.536 [INFO][4880] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" May 10 00:02:22.569978 containerd[1794]: 2025-05-10 00:02:22.536 [INFO][4880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" May 10 00:02:22.569978 containerd[1794]: 2025-05-10 00:02:22.556 [INFO][4887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" HandleID="k8s-pod-network.f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:22.569978 containerd[1794]: 2025-05-10 00:02:22.556 [INFO][4887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:22.569978 containerd[1794]: 2025-05-10 00:02:22.556 [INFO][4887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:22.569978 containerd[1794]: 2025-05-10 00:02:22.564 [WARNING][4887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" HandleID="k8s-pod-network.f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:22.569978 containerd[1794]: 2025-05-10 00:02:22.564 [INFO][4887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" HandleID="k8s-pod-network.f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:22.569978 containerd[1794]: 2025-05-10 00:02:22.566 [INFO][4887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:22.569978 containerd[1794]: 2025-05-10 00:02:22.568 [INFO][4880] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" May 10 00:02:22.570456 containerd[1794]: time="2025-05-10T00:02:22.570144287Z" level=info msg="TearDown network for sandbox \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\" successfully" May 10 00:02:22.570456 containerd[1794]: time="2025-05-10T00:02:22.570174047Z" level=info msg="StopPodSandbox for \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\" returns successfully" May 10 00:02:22.573393 containerd[1794]: time="2025-05-10T00:02:22.573348528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wwnd,Uid:a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb,Namespace:calico-system,Attempt:1,}" May 10 00:02:22.574235 systemd[1]: run-netns-cni\x2d076d49fa\x2dcd44\x2de7b8\x2d76b5\x2dd83b175773fc.mount: Deactivated successfully. 
May 10 00:02:22.723174 systemd-networkd[1380]: caliceec49c8f7e: Link UP May 10 00:02:22.725025 systemd-networkd[1380]: caliceec49c8f7e: Gained carrier May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.647 [INFO][4893] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0 csi-node-driver- calico-system a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb 749 0 2025-05-10 00:01:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-n-4cc30cd86c csi-node-driver-7wwnd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliceec49c8f7e [] []}} ContainerID="d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" Namespace="calico-system" Pod="csi-node-driver-7wwnd" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-" May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.647 [INFO][4893] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" Namespace="calico-system" Pod="csi-node-driver-7wwnd" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.674 [INFO][4906] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" HandleID="k8s-pod-network.d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.685 [INFO][4906] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" HandleID="k8s-pod-network.d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c7e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-4cc30cd86c", "pod":"csi-node-driver-7wwnd", "timestamp":"2025-05-10 00:02:22.674271642 +0000 UTC"}, Hostname:"ci-4081.3.3-n-4cc30cd86c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.686 [INFO][4906] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.686 [INFO][4906] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.686 [INFO][4906] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-4cc30cd86c' May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.687 [INFO][4906] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.691 [INFO][4906] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.695 [INFO][4906] ipam/ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.697 [INFO][4906] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.699 [INFO][4906] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.699 [INFO][4906] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.701 [INFO][4906] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17 May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.706 [INFO][4906] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.715 [INFO][4906] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.34.65/26] block=192.168.34.64/26 handle="k8s-pod-network.d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.716 [INFO][4906] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.65/26] handle="k8s-pod-network.d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.716 [INFO][4906] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:22.752585 containerd[1794]: 2025-05-10 00:02:22.716 [INFO][4906] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.65/26] IPv6=[] ContainerID="d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" HandleID="k8s-pod-network.d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:22.753253 containerd[1794]: 2025-05-10 00:02:22.718 [INFO][4893] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" Namespace="calico-system" Pod="csi-node-driver-7wwnd" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"", Pod:"csi-node-driver-7wwnd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliceec49c8f7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:22.753253 containerd[1794]: 2025-05-10 00:02:22.718 [INFO][4893] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.65/32] ContainerID="d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" Namespace="calico-system" Pod="csi-node-driver-7wwnd" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:22.753253 containerd[1794]: 2025-05-10 00:02:22.718 [INFO][4893] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliceec49c8f7e ContainerID="d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" Namespace="calico-system" Pod="csi-node-driver-7wwnd" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:22.753253 containerd[1794]: 2025-05-10 00:02:22.725 [INFO][4893] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" Namespace="calico-system" Pod="csi-node-driver-7wwnd" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:22.753253 containerd[1794]: 2025-05-10 00:02:22.725 
[INFO][4893] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" Namespace="calico-system" Pod="csi-node-driver-7wwnd" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17", Pod:"csi-node-driver-7wwnd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliceec49c8f7e", MAC:"7e:75:69:84:9e:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:22.753253 containerd[1794]: 2025-05-10 00:02:22.749 [INFO][4893] cni-plugin/k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17" Namespace="calico-system" Pod="csi-node-driver-7wwnd" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:22.779599 containerd[1794]: time="2025-05-10T00:02:22.779366437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:02:22.779599 containerd[1794]: time="2025-05-10T00:02:22.779435557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:02:22.779599 containerd[1794]: time="2025-05-10T00:02:22.779451197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:22.779599 containerd[1794]: time="2025-05-10T00:02:22.779551397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:22.821244 containerd[1794]: time="2025-05-10T00:02:22.821193811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wwnd,Uid:a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb,Namespace:calico-system,Attempt:1,} returns sandbox id \"d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17\"" May 10 00:02:22.823499 containerd[1794]: time="2025-05-10T00:02:22.823387812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 10 00:02:23.988762 containerd[1794]: time="2025-05-10T00:02:23.988471282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:23.990833 containerd[1794]: time="2025-05-10T00:02:23.990796443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 10 00:02:23.995435 containerd[1794]: 
time="2025-05-10T00:02:23.995369884Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:24.000203 containerd[1794]: time="2025-05-10T00:02:24.000146006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:24.001118 containerd[1794]: time="2025-05-10T00:02:24.000671446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.177252074s" May 10 00:02:24.001118 containerd[1794]: time="2025-05-10T00:02:24.000707486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 10 00:02:24.005277 containerd[1794]: time="2025-05-10T00:02:24.005234047Z" level=info msg="CreateContainer within sandbox \"d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 10 00:02:24.042060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4055259770.mount: Deactivated successfully. 
May 10 00:02:24.051976 containerd[1794]: time="2025-05-10T00:02:24.051916503Z" level=info msg="CreateContainer within sandbox \"d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"36a3b1bcfab59d7d37b144282cd70b5edd79ddc49e344382894d6678dbb2df61\"" May 10 00:02:24.052798 containerd[1794]: time="2025-05-10T00:02:24.052763023Z" level=info msg="StartContainer for \"36a3b1bcfab59d7d37b144282cd70b5edd79ddc49e344382894d6678dbb2df61\"" May 10 00:02:24.064939 systemd-networkd[1380]: caliceec49c8f7e: Gained IPv6LL May 10 00:02:24.123978 containerd[1794]: time="2025-05-10T00:02:24.123914327Z" level=info msg="StartContainer for \"36a3b1bcfab59d7d37b144282cd70b5edd79ddc49e344382894d6678dbb2df61\" returns successfully" May 10 00:02:24.126933 containerd[1794]: time="2025-05-10T00:02:24.126839288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 10 00:02:24.458741 containerd[1794]: time="2025-05-10T00:02:24.458680119Z" level=info msg="StopPodSandbox for \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\"" May 10 00:02:24.461489 containerd[1794]: time="2025-05-10T00:02:24.460431480Z" level=info msg="StopPodSandbox for \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\"" May 10 00:02:24.566467 containerd[1794]: 2025-05-10 00:02:24.525 [INFO][5032] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" May 10 00:02:24.566467 containerd[1794]: 2025-05-10 00:02:24.525 [INFO][5032] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" iface="eth0" netns="/var/run/netns/cni-41466e22-5b61-0375-7aeb-039e3ee20989" May 10 00:02:24.566467 containerd[1794]: 2025-05-10 00:02:24.525 [INFO][5032] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" iface="eth0" netns="/var/run/netns/cni-41466e22-5b61-0375-7aeb-039e3ee20989" May 10 00:02:24.566467 containerd[1794]: 2025-05-10 00:02:24.526 [INFO][5032] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" iface="eth0" netns="/var/run/netns/cni-41466e22-5b61-0375-7aeb-039e3ee20989" May 10 00:02:24.566467 containerd[1794]: 2025-05-10 00:02:24.526 [INFO][5032] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" May 10 00:02:24.566467 containerd[1794]: 2025-05-10 00:02:24.526 [INFO][5032] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" May 10 00:02:24.566467 containerd[1794]: 2025-05-10 00:02:24.549 [INFO][5044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" HandleID="k8s-pod-network.73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:24.566467 containerd[1794]: 2025-05-10 00:02:24.549 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:24.566467 containerd[1794]: 2025-05-10 00:02:24.549 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:24.566467 containerd[1794]: 2025-05-10 00:02:24.560 [WARNING][5044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" HandleID="k8s-pod-network.73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:24.566467 containerd[1794]: 2025-05-10 00:02:24.560 [INFO][5044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" HandleID="k8s-pod-network.73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:24.566467 containerd[1794]: 2025-05-10 00:02:24.562 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:24.566467 containerd[1794]: 2025-05-10 00:02:24.564 [INFO][5032] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" May 10 00:02:24.570103 containerd[1794]: time="2025-05-10T00:02:24.569416036Z" level=info msg="TearDown network for sandbox \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\" successfully" May 10 00:02:24.570103 containerd[1794]: time="2025-05-10T00:02:24.569467636Z" level=info msg="StopPodSandbox for \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\" returns successfully" May 10 00:02:24.570862 containerd[1794]: time="2025-05-10T00:02:24.570303797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db954c79f-lcw2d,Uid:4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d,Namespace:calico-system,Attempt:1,}" May 10 00:02:24.573664 systemd[1]: run-netns-cni\x2d41466e22\x2d5b61\x2d0375\x2d7aeb\x2d039e3ee20989.mount: Deactivated successfully. 
May 10 00:02:24.581757 containerd[1794]: 2025-05-10 00:02:24.524 [INFO][5025] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" May 10 00:02:24.581757 containerd[1794]: 2025-05-10 00:02:24.525 [INFO][5025] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" iface="eth0" netns="/var/run/netns/cni-8ff87602-f18b-334a-f474-7299d318ff7b" May 10 00:02:24.581757 containerd[1794]: 2025-05-10 00:02:24.525 [INFO][5025] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" iface="eth0" netns="/var/run/netns/cni-8ff87602-f18b-334a-f474-7299d318ff7b" May 10 00:02:24.581757 containerd[1794]: 2025-05-10 00:02:24.525 [INFO][5025] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" iface="eth0" netns="/var/run/netns/cni-8ff87602-f18b-334a-f474-7299d318ff7b" May 10 00:02:24.581757 containerd[1794]: 2025-05-10 00:02:24.525 [INFO][5025] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" May 10 00:02:24.581757 containerd[1794]: 2025-05-10 00:02:24.525 [INFO][5025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" May 10 00:02:24.581757 containerd[1794]: 2025-05-10 00:02:24.563 [INFO][5042] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" HandleID="k8s-pod-network.500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:24.581757 containerd[1794]: 2025-05-10 00:02:24.563 
[INFO][5042] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:24.581757 containerd[1794]: 2025-05-10 00:02:24.564 [INFO][5042] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:24.581757 containerd[1794]: 2025-05-10 00:02:24.576 [WARNING][5042] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" HandleID="k8s-pod-network.500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:24.581757 containerd[1794]: 2025-05-10 00:02:24.576 [INFO][5042] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" HandleID="k8s-pod-network.500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:24.581757 containerd[1794]: 2025-05-10 00:02:24.578 [INFO][5042] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:24.581757 containerd[1794]: 2025-05-10 00:02:24.580 [INFO][5025] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" May 10 00:02:24.582600 containerd[1794]: time="2025-05-10T00:02:24.582558641Z" level=info msg="TearDown network for sandbox \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\" successfully" May 10 00:02:24.582600 containerd[1794]: time="2025-05-10T00:02:24.582594521Z" level=info msg="StopPodSandbox for \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\" returns successfully" May 10 00:02:24.583895 containerd[1794]: time="2025-05-10T00:02:24.583820321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794797747f-x94dk,Uid:0b77bf54-6b1a-44ba-a50d-01c24a8c08ab,Namespace:calico-apiserver,Attempt:1,}" May 10 00:02:24.766210 systemd-networkd[1380]: cali131541be6c6: Link UP May 10 00:02:24.768233 systemd-networkd[1380]: cali131541be6c6: Gained carrier May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.656 [INFO][5057] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0 calico-kube-controllers-7db954c79f- calico-system 4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d 766 0 2025-05-10 00:01:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7db954c79f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-n-4cc30cd86c calico-kube-controllers-7db954c79f-lcw2d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali131541be6c6 [] []}} ContainerID="9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" Namespace="calico-system" Pod="calico-kube-controllers-7db954c79f-lcw2d" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-" May 10 
00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.656 [INFO][5057] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" Namespace="calico-system" Pod="calico-kube-controllers-7db954c79f-lcw2d" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.702 [INFO][5078] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" HandleID="k8s-pod-network.9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.715 [INFO][5078] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" HandleID="k8s-pod-network.9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011b500), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-4cc30cd86c", "pod":"calico-kube-controllers-7db954c79f-lcw2d", "timestamp":"2025-05-10 00:02:24.702109001 +0000 UTC"}, Hostname:"ci-4081.3.3-n-4cc30cd86c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.715 [INFO][5078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.715 [INFO][5078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.715 [INFO][5078] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-4cc30cd86c' May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.717 [INFO][5078] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.724 [INFO][5078] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.733 [INFO][5078] ipam/ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.736 [INFO][5078] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.739 [INFO][5078] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.739 [INFO][5078] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.741 [INFO][5078] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972 May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.747 [INFO][5078] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.756 [INFO][5078] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.34.66/26] block=192.168.34.64/26 handle="k8s-pod-network.9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.756 [INFO][5078] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.66/26] handle="k8s-pod-network.9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.756 [INFO][5078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:24.796880 containerd[1794]: 2025-05-10 00:02:24.756 [INFO][5078] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.66/26] IPv6=[] ContainerID="9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" HandleID="k8s-pod-network.9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:24.798376 containerd[1794]: 2025-05-10 00:02:24.760 [INFO][5057] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" Namespace="calico-system" Pod="calico-kube-controllers-7db954c79f-lcw2d" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0", GenerateName:"calico-kube-controllers-7db954c79f-", Namespace:"calico-system", SelfLink:"", UID:"4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db954c79f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"", Pod:"calico-kube-controllers-7db954c79f-lcw2d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali131541be6c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:24.798376 containerd[1794]: 2025-05-10 00:02:24.760 [INFO][5057] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.66/32] ContainerID="9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" Namespace="calico-system" Pod="calico-kube-controllers-7db954c79f-lcw2d" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:24.798376 containerd[1794]: 2025-05-10 00:02:24.760 [INFO][5057] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali131541be6c6 ContainerID="9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" Namespace="calico-system" Pod="calico-kube-controllers-7db954c79f-lcw2d" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:24.798376 containerd[1794]: 2025-05-10 00:02:24.769 [INFO][5057] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" Namespace="calico-system" Pod="calico-kube-controllers-7db954c79f-lcw2d" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:24.798376 containerd[1794]: 2025-05-10 00:02:24.772 [INFO][5057] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" Namespace="calico-system" Pod="calico-kube-controllers-7db954c79f-lcw2d" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0", GenerateName:"calico-kube-controllers-7db954c79f-", Namespace:"calico-system", SelfLink:"", UID:"4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db954c79f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972", Pod:"calico-kube-controllers-7db954c79f-lcw2d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.66/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali131541be6c6", MAC:"c6:53:88:f0:98:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:24.798376 containerd[1794]: 2025-05-10 00:02:24.793 [INFO][5057] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972" Namespace="calico-system" Pod="calico-kube-controllers-7db954c79f-lcw2d" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:24.833526 systemd-networkd[1380]: calic9d2a2fa605: Link UP May 10 00:02:24.833664 systemd-networkd[1380]: calic9d2a2fa605: Gained carrier May 10 00:02:24.843937 containerd[1794]: time="2025-05-10T00:02:24.843053048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:02:24.843937 containerd[1794]: time="2025-05-10T00:02:24.843115448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:02:24.843937 containerd[1794]: time="2025-05-10T00:02:24.843131328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:24.843937 containerd[1794]: time="2025-05-10T00:02:24.843231048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.692 [INFO][5068] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0 calico-apiserver-794797747f- calico-apiserver 0b77bf54-6b1a-44ba-a50d-01c24a8c08ab 765 0 2025-05-10 00:01:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:794797747f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-4cc30cd86c calico-apiserver-794797747f-x94dk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic9d2a2fa605 [] []}} ContainerID="f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-x94dk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-" May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.692 [INFO][5068] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-x94dk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.725 [INFO][5087] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" HandleID="k8s-pod-network.f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.737 [INFO][5087] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" HandleID="k8s-pod-network.f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003198f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-4cc30cd86c", "pod":"calico-apiserver-794797747f-x94dk", "timestamp":"2025-05-10 00:02:24.725932689 +0000 UTC"}, Hostname:"ci-4081.3.3-n-4cc30cd86c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.737 [INFO][5087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.757 [INFO][5087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.757 [INFO][5087] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-4cc30cd86c' May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.759 [INFO][5087] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.768 [INFO][5087] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.781 [INFO][5087] ipam/ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.784 [INFO][5087] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.790 [INFO][5087] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.793 [INFO][5087] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.797 [INFO][5087] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432 May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.806 [INFO][5087] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.827 [INFO][5087] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.34.67/26] block=192.168.34.64/26 handle="k8s-pod-network.f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.827 [INFO][5087] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.67/26] handle="k8s-pod-network.f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.827 [INFO][5087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:24.862112 containerd[1794]: 2025-05-10 00:02:24.827 [INFO][5087] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.67/26] IPv6=[] ContainerID="f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" HandleID="k8s-pod-network.f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:24.863371 containerd[1794]: 2025-05-10 00:02:24.830 [INFO][5068] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-x94dk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0", GenerateName:"calico-apiserver-794797747f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b77bf54-6b1a-44ba-a50d-01c24a8c08ab", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"794797747f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"", Pod:"calico-apiserver-794797747f-x94dk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9d2a2fa605", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:24.863371 containerd[1794]: 2025-05-10 00:02:24.830 [INFO][5068] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.67/32] ContainerID="f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-x94dk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:24.863371 containerd[1794]: 2025-05-10 00:02:24.830 [INFO][5068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9d2a2fa605 ContainerID="f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-x94dk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:24.863371 containerd[1794]: 2025-05-10 00:02:24.833 [INFO][5068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-x94dk" 
WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:24.863371 containerd[1794]: 2025-05-10 00:02:24.834 [INFO][5068] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-x94dk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0", GenerateName:"calico-apiserver-794797747f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b77bf54-6b1a-44ba-a50d-01c24a8c08ab", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794797747f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432", Pod:"calico-apiserver-794797747f-x94dk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9d2a2fa605", MAC:"16:f5:54:bc:0a:93", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:24.863371 containerd[1794]: 2025-05-10 00:02:24.854 [INFO][5068] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-x94dk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:24.904347 containerd[1794]: time="2025-05-10T00:02:24.904225189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:02:24.904347 containerd[1794]: time="2025-05-10T00:02:24.904312309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:02:24.904738 containerd[1794]: time="2025-05-10T00:02:24.904328309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:24.904738 containerd[1794]: time="2025-05-10T00:02:24.904609869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:24.913641 containerd[1794]: time="2025-05-10T00:02:24.913593312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db954c79f-lcw2d,Uid:4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d,Namespace:calico-system,Attempt:1,} returns sandbox id \"9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972\"" May 10 00:02:24.952367 containerd[1794]: time="2025-05-10T00:02:24.952323005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794797747f-x94dk,Uid:0b77bf54-6b1a-44ba-a50d-01c24a8c08ab,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432\"" May 10 00:02:25.043852 systemd[1]: run-netns-cni\x2d8ff87602\x2df18b\x2d334a\x2df474\x2d7299d318ff7b.mount: Deactivated successfully. May 10 00:02:25.460476 containerd[1794]: time="2025-05-10T00:02:25.459273094Z" level=info msg="StopPodSandbox for \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\"" May 10 00:02:25.465898 containerd[1794]: time="2025-05-10T00:02:25.463932456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:25.468300 containerd[1794]: time="2025-05-10T00:02:25.468268697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 10 00:02:25.472590 containerd[1794]: time="2025-05-10T00:02:25.472553539Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:25.479823 containerd[1794]: time="2025-05-10T00:02:25.479784661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:25.481123 containerd[1794]: time="2025-05-10T00:02:25.481091302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.354209254s" May 10 00:02:25.481247 containerd[1794]: time="2025-05-10T00:02:25.481230382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 10 00:02:25.484217 containerd[1794]: time="2025-05-10T00:02:25.484049463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 10 00:02:25.487086 containerd[1794]: time="2025-05-10T00:02:25.487057064Z" level=info msg="CreateContainer within sandbox \"d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 10 00:02:25.548633 containerd[1794]: time="2025-05-10T00:02:25.547824644Z" level=info msg="CreateContainer within sandbox \"d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"49db0dd9fec3fe5718377e57d7e0754209c5ceffbc93e650099048a2e379d9b0\"" May 10 00:02:25.553614 containerd[1794]: time="2025-05-10T00:02:25.553574566Z" level=info msg="StartContainer for \"49db0dd9fec3fe5718377e57d7e0754209c5ceffbc93e650099048a2e379d9b0\"" May 10 00:02:25.563855 containerd[1794]: 2025-05-10 00:02:25.513 [INFO][5225] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" May 10 
00:02:25.563855 containerd[1794]: 2025-05-10 00:02:25.513 [INFO][5225] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" iface="eth0" netns="/var/run/netns/cni-fb5ae7ed-7561-713f-183a-c831fc8530ed" May 10 00:02:25.563855 containerd[1794]: 2025-05-10 00:02:25.514 [INFO][5225] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" iface="eth0" netns="/var/run/netns/cni-fb5ae7ed-7561-713f-183a-c831fc8530ed" May 10 00:02:25.563855 containerd[1794]: 2025-05-10 00:02:25.514 [INFO][5225] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" iface="eth0" netns="/var/run/netns/cni-fb5ae7ed-7561-713f-183a-c831fc8530ed" May 10 00:02:25.563855 containerd[1794]: 2025-05-10 00:02:25.514 [INFO][5225] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" May 10 00:02:25.563855 containerd[1794]: 2025-05-10 00:02:25.514 [INFO][5225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" May 10 00:02:25.563855 containerd[1794]: 2025-05-10 00:02:25.544 [INFO][5232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" HandleID="k8s-pod-network.83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:25.563855 containerd[1794]: 2025-05-10 00:02:25.544 [INFO][5232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 10 00:02:25.563855 containerd[1794]: 2025-05-10 00:02:25.544 [INFO][5232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:25.563855 containerd[1794]: 2025-05-10 00:02:25.558 [WARNING][5232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" HandleID="k8s-pod-network.83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:25.563855 containerd[1794]: 2025-05-10 00:02:25.558 [INFO][5232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" HandleID="k8s-pod-network.83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:25.563855 containerd[1794]: 2025-05-10 00:02:25.560 [INFO][5232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:25.563855 containerd[1794]: 2025-05-10 00:02:25.561 [INFO][5225] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" May 10 00:02:25.567016 containerd[1794]: time="2025-05-10T00:02:25.566775890Z" level=info msg="TearDown network for sandbox \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\" successfully" May 10 00:02:25.567016 containerd[1794]: time="2025-05-10T00:02:25.566815010Z" level=info msg="StopPodSandbox for \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\" returns successfully" May 10 00:02:25.567632 systemd[1]: run-netns-cni\x2dfb5ae7ed\x2d7561\x2d713f\x2d183a\x2dc831fc8530ed.mount: Deactivated successfully. 
May 10 00:02:25.569015 containerd[1794]: time="2025-05-10T00:02:25.568987571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794797747f-2h685,Uid:0d72ed92-d321-47fa-9ed1-47ca14227fe8,Namespace:calico-apiserver,Attempt:1,}" May 10 00:02:25.617234 containerd[1794]: time="2025-05-10T00:02:25.617117427Z" level=info msg="StartContainer for \"49db0dd9fec3fe5718377e57d7e0754209c5ceffbc93e650099048a2e379d9b0\" returns successfully" May 10 00:02:25.739969 systemd-networkd[1380]: cali64b3b8ac7b5: Link UP May 10 00:02:25.740113 systemd-networkd[1380]: cali64b3b8ac7b5: Gained carrier May 10 00:02:25.745668 kubelet[3407]: I0510 00:02:25.745551 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7wwnd" podStartSLOduration=24.085766939 podStartE2EDuration="26.74551379s" podCreationTimestamp="2025-05-10 00:01:59 +0000 UTC" firstStartedPulling="2025-05-10 00:02:22.822671451 +0000 UTC m=+47.477809243" lastFinishedPulling="2025-05-10 00:02:25.482418262 +0000 UTC m=+50.137556094" observedRunningTime="2025-05-10 00:02:25.743217269 +0000 UTC m=+50.398355101" watchObservedRunningTime="2025-05-10 00:02:25.74551379 +0000 UTC m=+50.400651782" May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.653 [INFO][5272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0 calico-apiserver-794797747f- calico-apiserver 0d72ed92-d321-47fa-9ed1-47ca14227fe8 780 0 2025-05-10 00:01:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:794797747f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-4cc30cd86c calico-apiserver-794797747f-2h685 eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali64b3b8ac7b5 [] []}} ContainerID="2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-2h685" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-" May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.653 [INFO][5272] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-2h685" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.693 [INFO][5287] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" HandleID="k8s-pod-network.2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.704 [INFO][5287] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" HandleID="k8s-pod-network.2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-4cc30cd86c", "pod":"calico-apiserver-794797747f-2h685", "timestamp":"2025-05-10 00:02:25.693074653 +0000 UTC"}, Hostname:"ci-4081.3.3-n-4cc30cd86c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:02:25.766801 
containerd[1794]: 2025-05-10 00:02:25.704 [INFO][5287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.704 [INFO][5287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.704 [INFO][5287] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-4cc30cd86c' May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.706 [INFO][5287] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.709 [INFO][5287] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.713 [INFO][5287] ipam/ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.715 [INFO][5287] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.718 [INFO][5287] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.718 [INFO][5287] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.719 [INFO][5287] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.725 [INFO][5287] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.34.64/26 handle="k8s-pod-network.2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.734 [INFO][5287] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.68/26] block=192.168.34.64/26 handle="k8s-pod-network.2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.734 [INFO][5287] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.68/26] handle="k8s-pod-network.2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.734 [INFO][5287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:25.766801 containerd[1794]: 2025-05-10 00:02:25.734 [INFO][5287] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.68/26] IPv6=[] ContainerID="2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" HandleID="k8s-pod-network.2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:25.769416 containerd[1794]: 2025-05-10 00:02:25.736 [INFO][5272] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-2h685" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0", GenerateName:"calico-apiserver-794797747f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d72ed92-d321-47fa-9ed1-47ca14227fe8", ResourceVersion:"780", 
Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794797747f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"", Pod:"calico-apiserver-794797747f-2h685", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64b3b8ac7b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:25.769416 containerd[1794]: 2025-05-10 00:02:25.736 [INFO][5272] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.68/32] ContainerID="2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-2h685" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:25.769416 containerd[1794]: 2025-05-10 00:02:25.736 [INFO][5272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64b3b8ac7b5 ContainerID="2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-2h685" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:25.769416 containerd[1794]: 2025-05-10 00:02:25.745 [INFO][5272] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-2h685" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:25.769416 containerd[1794]: 2025-05-10 00:02:25.747 [INFO][5272] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-2h685" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0", GenerateName:"calico-apiserver-794797747f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d72ed92-d321-47fa-9ed1-47ca14227fe8", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794797747f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c", Pod:"calico-apiserver-794797747f-2h685", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.68/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64b3b8ac7b5", MAC:"ca:89:ef:78:36:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:25.769416 containerd[1794]: 2025-05-10 00:02:25.762 [INFO][5272] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c" Namespace="calico-apiserver" Pod="calico-apiserver-794797747f-2h685" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:25.919928 systemd-networkd[1380]: calic9d2a2fa605: Gained IPv6LL May 10 00:02:26.032037 containerd[1794]: time="2025-05-10T00:02:26.031171846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:02:26.032037 containerd[1794]: time="2025-05-10T00:02:26.031777926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:02:26.032037 containerd[1794]: time="2025-05-10T00:02:26.031836006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:26.032391 containerd[1794]: time="2025-05-10T00:02:26.032304206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:26.082036 containerd[1794]: time="2025-05-10T00:02:26.081971863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794797747f-2h685,Uid:0d72ed92-d321-47fa-9ed1-47ca14227fe8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c\"" May 10 00:02:26.432054 systemd-networkd[1380]: cali131541be6c6: Gained IPv6LL May 10 00:02:26.459079 containerd[1794]: time="2025-05-10T00:02:26.458991509Z" level=info msg="StopPodSandbox for \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\"" May 10 00:02:26.459803 containerd[1794]: time="2025-05-10T00:02:26.459610229Z" level=info msg="StopPodSandbox for \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\"" May 10 00:02:26.566341 kubelet[3407]: I0510 00:02:26.566311 3407 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 10 00:02:26.566690 kubelet[3407]: I0510 00:02:26.566536 3407 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 10 00:02:26.589028 containerd[1794]: 2025-05-10 00:02:26.527 [INFO][5378] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" May 10 00:02:26.589028 containerd[1794]: 2025-05-10 00:02:26.528 [INFO][5378] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" iface="eth0" netns="/var/run/netns/cni-a747e4e2-f80f-09f4-a576-d067d7a5caa3" May 10 00:02:26.589028 containerd[1794]: 2025-05-10 00:02:26.529 [INFO][5378] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" iface="eth0" netns="/var/run/netns/cni-a747e4e2-f80f-09f4-a576-d067d7a5caa3" May 10 00:02:26.589028 containerd[1794]: 2025-05-10 00:02:26.529 [INFO][5378] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" iface="eth0" netns="/var/run/netns/cni-a747e4e2-f80f-09f4-a576-d067d7a5caa3" May 10 00:02:26.589028 containerd[1794]: 2025-05-10 00:02:26.529 [INFO][5378] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" May 10 00:02:26.589028 containerd[1794]: 2025-05-10 00:02:26.529 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" May 10 00:02:26.589028 containerd[1794]: 2025-05-10 00:02:26.561 [INFO][5388] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" HandleID="k8s-pod-network.cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:26.589028 containerd[1794]: 2025-05-10 00:02:26.561 [INFO][5388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:26.589028 containerd[1794]: 2025-05-10 00:02:26.561 [INFO][5388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:26.589028 containerd[1794]: 2025-05-10 00:02:26.575 [WARNING][5388] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" HandleID="k8s-pod-network.cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:26.589028 containerd[1794]: 2025-05-10 00:02:26.575 [INFO][5388] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" HandleID="k8s-pod-network.cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:26.589028 containerd[1794]: 2025-05-10 00:02:26.576 [INFO][5388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:26.589028 containerd[1794]: 2025-05-10 00:02:26.583 [INFO][5378] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" May 10 00:02:26.596986 containerd[1794]: time="2025-05-10T00:02:26.596427435Z" level=info msg="TearDown network for sandbox \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\" successfully" May 10 00:02:26.596986 containerd[1794]: time="2025-05-10T00:02:26.596473075Z" level=info msg="StopPodSandbox for \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\" returns successfully" May 10 00:02:26.597918 containerd[1794]: time="2025-05-10T00:02:26.597536836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fkpjk,Uid:33a5d1c8-646f-487e-a061-b26667f1063e,Namespace:kube-system,Attempt:1,}" May 10 00:02:26.598578 systemd[1]: run-netns-cni\x2da747e4e2\x2df80f\x2d09f4\x2da576\x2dd067d7a5caa3.mount: Deactivated successfully. 
May 10 00:02:26.607183 containerd[1794]: 2025-05-10 00:02:26.533 [INFO][5370] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" May 10 00:02:26.607183 containerd[1794]: 2025-05-10 00:02:26.533 [INFO][5370] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" iface="eth0" netns="/var/run/netns/cni-d8211d02-f1fa-2282-35d1-0d90c7ddae04" May 10 00:02:26.607183 containerd[1794]: 2025-05-10 00:02:26.534 [INFO][5370] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" iface="eth0" netns="/var/run/netns/cni-d8211d02-f1fa-2282-35d1-0d90c7ddae04" May 10 00:02:26.607183 containerd[1794]: 2025-05-10 00:02:26.534 [INFO][5370] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" iface="eth0" netns="/var/run/netns/cni-d8211d02-f1fa-2282-35d1-0d90c7ddae04" May 10 00:02:26.607183 containerd[1794]: 2025-05-10 00:02:26.534 [INFO][5370] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" May 10 00:02:26.607183 containerd[1794]: 2025-05-10 00:02:26.534 [INFO][5370] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" May 10 00:02:26.607183 containerd[1794]: 2025-05-10 00:02:26.568 [INFO][5393] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" HandleID="k8s-pod-network.994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:26.607183 containerd[1794]: 2025-05-10 00:02:26.569 [INFO][5393] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:26.607183 containerd[1794]: 2025-05-10 00:02:26.577 [INFO][5393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:26.607183 containerd[1794]: 2025-05-10 00:02:26.592 [WARNING][5393] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" HandleID="k8s-pod-network.994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:26.607183 containerd[1794]: 2025-05-10 00:02:26.592 [INFO][5393] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" HandleID="k8s-pod-network.994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:26.607183 containerd[1794]: 2025-05-10 00:02:26.602 [INFO][5393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:26.607183 containerd[1794]: 2025-05-10 00:02:26.604 [INFO][5370] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" May 10 00:02:26.608658 containerd[1794]: time="2025-05-10T00:02:26.607376319Z" level=info msg="TearDown network for sandbox \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\" successfully" May 10 00:02:26.608658 containerd[1794]: time="2025-05-10T00:02:26.607407399Z" level=info msg="StopPodSandbox for \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\" returns successfully" May 10 00:02:26.610100 containerd[1794]: time="2025-05-10T00:02:26.609193839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jl5p6,Uid:ffcbd6aa-6c16-4f74-ab76-6cff432c2624,Namespace:kube-system,Attempt:1,}" May 10 00:02:26.611795 systemd[1]: run-netns-cni\x2dd8211d02\x2df1fa\x2d2282\x2d35d1\x2d0d90c7ddae04.mount: Deactivated successfully. May 10 00:02:26.844588 systemd-networkd[1380]: cali71586b11a1c: Link UP May 10 00:02:26.847313 systemd-networkd[1380]: cali71586b11a1c: Gained carrier May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.734 [INFO][5401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0 coredns-7db6d8ff4d- kube-system 33a5d1c8-646f-487e-a061-b26667f1063e 791 0 2025-05-10 00:01:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-4cc30cd86c coredns-7db6d8ff4d-fkpjk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali71586b11a1c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fkpjk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-" May 10 00:02:26.880057 
containerd[1794]: 2025-05-10 00:02:26.734 [INFO][5401] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fkpjk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.769 [INFO][5426] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" HandleID="k8s-pod-network.afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.787 [INFO][5426] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" HandleID="k8s-pod-network.afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030b520), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-4cc30cd86c", "pod":"coredns-7db6d8ff4d-fkpjk", "timestamp":"2025-05-10 00:02:26.769948013 +0000 UTC"}, Hostname:"ci-4081.3.3-n-4cc30cd86c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.788 [INFO][5426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.788 [INFO][5426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.788 [INFO][5426] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-4cc30cd86c' May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.791 [INFO][5426] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.797 [INFO][5426] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.803 [INFO][5426] ipam/ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.806 [INFO][5426] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.810 [INFO][5426] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.810 [INFO][5426] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.812 [INFO][5426] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745 May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.818 [INFO][5426] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.834 [INFO][5426] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.34.69/26] block=192.168.34.64/26 handle="k8s-pod-network.afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.834 [INFO][5426] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.69/26] handle="k8s-pod-network.afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.834 [INFO][5426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:26.880057 containerd[1794]: 2025-05-10 00:02:26.834 [INFO][5426] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.69/26] IPv6=[] ContainerID="afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" HandleID="k8s-pod-network.afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:26.882465 containerd[1794]: 2025-05-10 00:02:26.836 [INFO][5401] cni-plugin/k8s.go 386: Populated endpoint ContainerID="afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fkpjk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"33a5d1c8-646f-487e-a061-b26667f1063e", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"", Pod:"coredns-7db6d8ff4d-fkpjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71586b11a1c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:26.882465 containerd[1794]: 2025-05-10 00:02:26.836 [INFO][5401] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.69/32] ContainerID="afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fkpjk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:26.882465 containerd[1794]: 2025-05-10 00:02:26.836 [INFO][5401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71586b11a1c ContainerID="afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fkpjk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:26.882465 containerd[1794]: 2025-05-10 00:02:26.848 [INFO][5401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fkpjk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:26.882465 containerd[1794]: 2025-05-10 00:02:26.848 [INFO][5401] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fkpjk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"33a5d1c8-646f-487e-a061-b26667f1063e", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745", Pod:"coredns-7db6d8ff4d-fkpjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71586b11a1c", MAC:"b2:04:9d:8b:3a:88", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:26.882465 containerd[1794]: 2025-05-10 00:02:26.877 [INFO][5401] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fkpjk" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:26.912342 systemd-networkd[1380]: cali48d2fe5df46: Link UP May 10 00:02:26.912701 systemd-networkd[1380]: cali48d2fe5df46: Gained carrier May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.749 [INFO][5411] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0 coredns-7db6d8ff4d- kube-system ffcbd6aa-6c16-4f74-ab76-6cff432c2624 792 0 2025-05-10 00:01:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-4cc30cd86c coredns-7db6d8ff4d-jl5p6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali48d2fe5df46 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jl5p6" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-" May 10 00:02:26.942780 containerd[1794]: 
2025-05-10 00:02:26.749 [INFO][5411] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jl5p6" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.796 [INFO][5432] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" HandleID="k8s-pod-network.eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.818 [INFO][5432] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" HandleID="k8s-pod-network.eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031b390), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-4cc30cd86c", "pod":"coredns-7db6d8ff4d-jl5p6", "timestamp":"2025-05-10 00:02:26.796748702 +0000 UTC"}, Hostname:"ci-4081.3.3-n-4cc30cd86c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.818 [INFO][5432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.834 [INFO][5432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.834 [INFO][5432] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-4cc30cd86c' May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.839 [INFO][5432] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.853 [INFO][5432] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.862 [INFO][5432] ipam/ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.870 [INFO][5432] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.876 [INFO][5432] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.877 [INFO][5432] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.879 [INFO][5432] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.886 [INFO][5432] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.900 [INFO][5432] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.34.70/26] block=192.168.34.64/26 handle="k8s-pod-network.eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.900 [INFO][5432] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.70/26] handle="k8s-pod-network.eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" host="ci-4081.3.3-n-4cc30cd86c" May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.900 [INFO][5432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:26.942780 containerd[1794]: 2025-05-10 00:02:26.900 [INFO][5432] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.70/26] IPv6=[] ContainerID="eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" HandleID="k8s-pod-network.eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:26.946005 containerd[1794]: 2025-05-10 00:02:26.906 [INFO][5411] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jl5p6" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ffcbd6aa-6c16-4f74-ab76-6cff432c2624", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"", Pod:"coredns-7db6d8ff4d-jl5p6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48d2fe5df46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:26.946005 containerd[1794]: 2025-05-10 00:02:26.906 [INFO][5411] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.70/32] ContainerID="eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jl5p6" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:26.946005 containerd[1794]: 2025-05-10 00:02:26.906 [INFO][5411] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48d2fe5df46 ContainerID="eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jl5p6" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:26.946005 containerd[1794]: 2025-05-10 00:02:26.913 [INFO][5411] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jl5p6" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:26.946005 containerd[1794]: 2025-05-10 00:02:26.914 [INFO][5411] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jl5p6" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ffcbd6aa-6c16-4f74-ab76-6cff432c2624", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce", Pod:"coredns-7db6d8ff4d-jl5p6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48d2fe5df46", MAC:"12:00:98:0c:a7:2f", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:26.946005 containerd[1794]: 2025-05-10 00:02:26.940 [INFO][5411] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jl5p6" WorkloadEndpoint="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:26.943899 systemd-networkd[1380]: cali64b3b8ac7b5: Gained IPv6LL May 10 00:02:26.957024 containerd[1794]: time="2025-05-10T00:02:26.956740916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:02:26.957024 containerd[1794]: time="2025-05-10T00:02:26.956815116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:02:26.957024 containerd[1794]: time="2025-05-10T00:02:26.956827876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:26.958078 containerd[1794]: time="2025-05-10T00:02:26.957193196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:27.008251 containerd[1794]: time="2025-05-10T00:02:27.008208093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fkpjk,Uid:33a5d1c8-646f-487e-a061-b26667f1063e,Namespace:kube-system,Attempt:1,} returns sandbox id \"afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745\"" May 10 00:02:27.019082 containerd[1794]: time="2025-05-10T00:02:27.018941417Z" level=info msg="CreateContainer within sandbox \"afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:02:27.098302 containerd[1794]: time="2025-05-10T00:02:27.098074883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:02:27.098302 containerd[1794]: time="2025-05-10T00:02:27.098138603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:02:27.098302 containerd[1794]: time="2025-05-10T00:02:27.098150923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:27.099333 containerd[1794]: time="2025-05-10T00:02:27.098909323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:02:27.118709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1988785709.mount: Deactivated successfully. 
May 10 00:02:27.136732 containerd[1794]: time="2025-05-10T00:02:27.135530096Z" level=info msg="CreateContainer within sandbox \"afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ffa180dad2209edcd79a36137788226df05bf0e5d55f66f4cac28ac86c69a7d\"" May 10 00:02:27.137732 containerd[1794]: time="2025-05-10T00:02:27.137215896Z" level=info msg="StartContainer for \"6ffa180dad2209edcd79a36137788226df05bf0e5d55f66f4cac28ac86c69a7d\"" May 10 00:02:27.175125 containerd[1794]: time="2025-05-10T00:02:27.175089069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jl5p6,Uid:ffcbd6aa-6c16-4f74-ab76-6cff432c2624,Namespace:kube-system,Attempt:1,} returns sandbox id \"eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce\"" May 10 00:02:27.185320 containerd[1794]: time="2025-05-10T00:02:27.184968312Z" level=info msg="CreateContainer within sandbox \"eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:02:27.228193 containerd[1794]: time="2025-05-10T00:02:27.228106487Z" level=info msg="StartContainer for \"6ffa180dad2209edcd79a36137788226df05bf0e5d55f66f4cac28ac86c69a7d\" returns successfully" May 10 00:02:27.233377 containerd[1794]: time="2025-05-10T00:02:27.232932168Z" level=info msg="CreateContainer within sandbox \"eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"43a0a5dd74852d502826925e1a40106db0b4d5eb93d4a34457aa602278496127\"" May 10 00:02:27.234756 containerd[1794]: time="2025-05-10T00:02:27.234131329Z" level=info msg="StartContainer for \"43a0a5dd74852d502826925e1a40106db0b4d5eb93d4a34457aa602278496127\"" May 10 00:02:27.324667 containerd[1794]: time="2025-05-10T00:02:27.324551319Z" level=info msg="StartContainer for 
\"43a0a5dd74852d502826925e1a40106db0b4d5eb93d4a34457aa602278496127\" returns successfully" May 10 00:02:27.695685 containerd[1794]: time="2025-05-10T00:02:27.695263883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:27.698868 containerd[1794]: time="2025-05-10T00:02:27.698672724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 10 00:02:27.702108 containerd[1794]: time="2025-05-10T00:02:27.702024205Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:27.706750 containerd[1794]: time="2025-05-10T00:02:27.706683167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:27.707596 containerd[1794]: time="2025-05-10T00:02:27.707487447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 2.223402304s" May 10 00:02:27.707596 containerd[1794]: time="2025-05-10T00:02:27.707519487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 10 00:02:27.709702 containerd[1794]: time="2025-05-10T00:02:27.709473608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:02:27.715996 containerd[1794]: 
time="2025-05-10T00:02:27.715963010Z" level=info msg="CreateContainer within sandbox \"9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 10 00:02:27.756224 kubelet[3407]: I0510 00:02:27.755370 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fkpjk" podStartSLOduration=38.755341103 podStartE2EDuration="38.755341103s" podCreationTimestamp="2025-05-10 00:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:02:27.754832543 +0000 UTC m=+52.409970375" watchObservedRunningTime="2025-05-10 00:02:27.755341103 +0000 UTC m=+52.410478935" May 10 00:02:27.761116 containerd[1794]: time="2025-05-10T00:02:27.761000265Z" level=info msg="CreateContainer within sandbox \"9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d10259deb132f1e52bf74a8e0577c060dd87426f848167d1f861da14812c9cd3\"" May 10 00:02:27.762454 containerd[1794]: time="2025-05-10T00:02:27.761816705Z" level=info msg="StartContainer for \"d10259deb132f1e52bf74a8e0577c060dd87426f848167d1f861da14812c9cd3\"" May 10 00:02:27.812964 kubelet[3407]: I0510 00:02:27.812888 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jl5p6" podStartSLOduration=38.812867763 podStartE2EDuration="38.812867763s" podCreationTimestamp="2025-05-10 00:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:02:27.780940832 +0000 UTC m=+52.436078664" watchObservedRunningTime="2025-05-10 00:02:27.812867763 +0000 UTC m=+52.468005595" May 10 00:02:27.856770 containerd[1794]: time="2025-05-10T00:02:27.856553057Z" level=info msg="StartContainer 
for \"d10259deb132f1e52bf74a8e0577c060dd87426f848167d1f861da14812c9cd3\" returns successfully" May 10 00:02:28.287994 systemd-networkd[1380]: cali71586b11a1c: Gained IPv6LL May 10 00:02:28.780741 kubelet[3407]: I0510 00:02:28.780580 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7db954c79f-lcw2d" podStartSLOduration=26.987649831 podStartE2EDuration="29.780562007s" podCreationTimestamp="2025-05-10 00:01:59 +0000 UTC" firstStartedPulling="2025-05-10 00:02:24.915484152 +0000 UTC m=+49.570621984" lastFinishedPulling="2025-05-10 00:02:27.708396368 +0000 UTC m=+52.363534160" observedRunningTime="2025-05-10 00:02:28.778820246 +0000 UTC m=+53.433958118" watchObservedRunningTime="2025-05-10 00:02:28.780562007 +0000 UTC m=+53.435699839" May 10 00:02:28.863883 systemd-networkd[1380]: cali48d2fe5df46: Gained IPv6LL May 10 00:02:30.246781 containerd[1794]: time="2025-05-10T00:02:30.246323203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:30.248610 containerd[1794]: time="2025-05-10T00:02:30.248487204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 10 00:02:30.253411 containerd[1794]: time="2025-05-10T00:02:30.253318205Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:30.259180 containerd[1794]: time="2025-05-10T00:02:30.259100807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:30.260171 containerd[1794]: time="2025-05-10T00:02:30.260050127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" 
with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 2.550544839s" May 10 00:02:30.260171 containerd[1794]: time="2025-05-10T00:02:30.260085247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:02:30.262130 containerd[1794]: time="2025-05-10T00:02:30.261654368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:02:30.262918 containerd[1794]: time="2025-05-10T00:02:30.262882688Z" level=info msg="CreateContainer within sandbox \"f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:02:30.303015 containerd[1794]: time="2025-05-10T00:02:30.302971180Z" level=info msg="CreateContainer within sandbox \"f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"92bc772d21997c6242238c1d540b73c899574df1c9f471c032273dc2fe8d470d\"" May 10 00:02:30.303982 containerd[1794]: time="2025-05-10T00:02:30.303921861Z" level=info msg="StartContainer for \"92bc772d21997c6242238c1d540b73c899574df1c9f471c032273dc2fe8d470d\"" May 10 00:02:30.381767 containerd[1794]: time="2025-05-10T00:02:30.381703004Z" level=info msg="StartContainer for \"92bc772d21997c6242238c1d540b73c899574df1c9f471c032273dc2fe8d470d\" returns successfully" May 10 00:02:30.607812 containerd[1794]: time="2025-05-10T00:02:30.607069071Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:02:30.611123 containerd[1794]: 
time="2025-05-10T00:02:30.610615432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 10 00:02:30.612189 containerd[1794]: time="2025-05-10T00:02:30.612159672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 350.474064ms" May 10 00:02:30.612232 containerd[1794]: time="2025-05-10T00:02:30.612205712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:02:30.615664 containerd[1794]: time="2025-05-10T00:02:30.615628113Z" level=info msg="CreateContainer within sandbox \"2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:02:30.660580 containerd[1794]: time="2025-05-10T00:02:30.660457087Z" level=info msg="CreateContainer within sandbox \"2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cfcc4b2e79fcf9a2acb3a90ad258679065feb9117c32dc6ee3bc057f7e65493e\"" May 10 00:02:30.661985 containerd[1794]: time="2025-05-10T00:02:30.661951327Z" level=info msg="StartContainer for \"cfcc4b2e79fcf9a2acb3a90ad258679065feb9117c32dc6ee3bc057f7e65493e\"" May 10 00:02:30.736312 containerd[1794]: time="2025-05-10T00:02:30.736268389Z" level=info msg="StartContainer for \"cfcc4b2e79fcf9a2acb3a90ad258679065feb9117c32dc6ee3bc057f7e65493e\" returns successfully" May 10 00:02:30.803470 kubelet[3407]: I0510 00:02:30.803397 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-apiserver/calico-apiserver-794797747f-2h685" podStartSLOduration=28.273907199 podStartE2EDuration="32.803359089s" podCreationTimestamp="2025-05-10 00:01:58 +0000 UTC" firstStartedPulling="2025-05-10 00:02:26.083495463 +0000 UTC m=+50.738633295" lastFinishedPulling="2025-05-10 00:02:30.612947353 +0000 UTC m=+55.268085185" observedRunningTime="2025-05-10 00:02:30.801511329 +0000 UTC m=+55.456649161" watchObservedRunningTime="2025-05-10 00:02:30.803359089 +0000 UTC m=+55.458496881" May 10 00:02:30.805845 kubelet[3407]: I0510 00:02:30.805787 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-794797747f-x94dk" podStartSLOduration=27.498405887 podStartE2EDuration="32.80577437s" podCreationTimestamp="2025-05-10 00:01:58 +0000 UTC" firstStartedPulling="2025-05-10 00:02:24.953603645 +0000 UTC m=+49.608741477" lastFinishedPulling="2025-05-10 00:02:30.260972128 +0000 UTC m=+54.916109960" observedRunningTime="2025-05-10 00:02:30.782933363 +0000 UTC m=+55.438071315" watchObservedRunningTime="2025-05-10 00:02:30.80577437 +0000 UTC m=+55.460912162" May 10 00:02:31.768801 kubelet[3407]: I0510 00:02:31.768748 3407 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:02:31.768801 kubelet[3407]: I0510 00:02:31.768752 3407 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:02:35.476332 containerd[1794]: time="2025-05-10T00:02:35.476234961Z" level=info msg="StopPodSandbox for \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\"" May 10 00:02:35.556066 containerd[1794]: 2025-05-10 00:02:35.513 [WARNING][5801] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17", Pod:"csi-node-driver-7wwnd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliceec49c8f7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:35.556066 containerd[1794]: 2025-05-10 00:02:35.513 [INFO][5801] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" May 10 00:02:35.556066 containerd[1794]: 2025-05-10 00:02:35.513 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" iface="eth0" netns="" May 10 00:02:35.556066 containerd[1794]: 2025-05-10 00:02:35.513 [INFO][5801] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" May 10 00:02:35.556066 containerd[1794]: 2025-05-10 00:02:35.513 [INFO][5801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" May 10 00:02:35.556066 containerd[1794]: 2025-05-10 00:02:35.537 [INFO][5808] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" HandleID="k8s-pod-network.f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:35.556066 containerd[1794]: 2025-05-10 00:02:35.537 [INFO][5808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:35.556066 containerd[1794]: 2025-05-10 00:02:35.537 [INFO][5808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:35.556066 containerd[1794]: 2025-05-10 00:02:35.548 [WARNING][5808] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" HandleID="k8s-pod-network.f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:35.556066 containerd[1794]: 2025-05-10 00:02:35.548 [INFO][5808] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" HandleID="k8s-pod-network.f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:35.556066 containerd[1794]: 2025-05-10 00:02:35.550 [INFO][5808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:35.556066 containerd[1794]: 2025-05-10 00:02:35.552 [INFO][5801] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" May 10 00:02:35.556679 containerd[1794]: time="2025-05-10T00:02:35.556548705Z" level=info msg="TearDown network for sandbox \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\" successfully" May 10 00:02:35.556679 containerd[1794]: time="2025-05-10T00:02:35.556580625Z" level=info msg="StopPodSandbox for \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\" returns successfully" May 10 00:02:35.557442 containerd[1794]: time="2025-05-10T00:02:35.557177025Z" level=info msg="RemovePodSandbox for \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\"" May 10 00:02:35.557442 containerd[1794]: time="2025-05-10T00:02:35.557205665Z" level=info msg="Forcibly stopping sandbox \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\"" May 10 00:02:35.629993 containerd[1794]: 2025-05-10 00:02:35.597 [WARNING][5826] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a32e4269-f2a9-43d8-bcf4-45ed0c55b9eb", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"d42c78d4a8040f1c53203fe1ffd78b4f3abbf3b7c249d056b58adbd3f334ce17", Pod:"csi-node-driver-7wwnd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliceec49c8f7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:35.629993 containerd[1794]: 2025-05-10 00:02:35.597 [INFO][5826] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" May 10 00:02:35.629993 containerd[1794]: 2025-05-10 00:02:35.598 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" iface="eth0" netns="" May 10 00:02:35.629993 containerd[1794]: 2025-05-10 00:02:35.598 [INFO][5826] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" May 10 00:02:35.629993 containerd[1794]: 2025-05-10 00:02:35.598 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" May 10 00:02:35.629993 containerd[1794]: 2025-05-10 00:02:35.616 [INFO][5834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" HandleID="k8s-pod-network.f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:35.629993 containerd[1794]: 2025-05-10 00:02:35.616 [INFO][5834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:35.629993 containerd[1794]: 2025-05-10 00:02:35.616 [INFO][5834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:35.629993 containerd[1794]: 2025-05-10 00:02:35.625 [WARNING][5834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" HandleID="k8s-pod-network.f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:35.629993 containerd[1794]: 2025-05-10 00:02:35.625 [INFO][5834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" HandleID="k8s-pod-network.f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-csi--node--driver--7wwnd-eth0" May 10 00:02:35.629993 containerd[1794]: 2025-05-10 00:02:35.626 [INFO][5834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:35.629993 containerd[1794]: 2025-05-10 00:02:35.628 [INFO][5826] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc" May 10 00:02:35.630441 containerd[1794]: time="2025-05-10T00:02:35.630031807Z" level=info msg="TearDown network for sandbox \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\" successfully" May 10 00:02:35.643983 containerd[1794]: time="2025-05-10T00:02:35.643934331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 10 00:02:35.644105 containerd[1794]: time="2025-05-10T00:02:35.644037651Z" level=info msg="RemovePodSandbox \"f1ab57c5e4a41bae35e2ee395e6880ceca136e49568ea5695eb2d817370516cc\" returns successfully" May 10 00:02:35.644759 containerd[1794]: time="2025-05-10T00:02:35.644713011Z" level=info msg="StopPodSandbox for \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\"" May 10 00:02:35.734989 containerd[1794]: 2025-05-10 00:02:35.681 [WARNING][5852] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0", GenerateName:"calico-kube-controllers-7db954c79f-", Namespace:"calico-system", SelfLink:"", UID:"4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db954c79f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972", Pod:"calico-kube-controllers-7db954c79f-lcw2d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali131541be6c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:35.734989 containerd[1794]: 2025-05-10 00:02:35.681 [INFO][5852] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" May 10 00:02:35.734989 containerd[1794]: 2025-05-10 00:02:35.682 [INFO][5852] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" iface="eth0" netns="" May 10 00:02:35.734989 containerd[1794]: 2025-05-10 00:02:35.682 [INFO][5852] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" May 10 00:02:35.734989 containerd[1794]: 2025-05-10 00:02:35.682 [INFO][5852] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" May 10 00:02:35.734989 containerd[1794]: 2025-05-10 00:02:35.719 [INFO][5859] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" HandleID="k8s-pod-network.73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:35.734989 containerd[1794]: 2025-05-10 00:02:35.719 [INFO][5859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:35.734989 containerd[1794]: 2025-05-10 00:02:35.719 [INFO][5859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:35.734989 containerd[1794]: 2025-05-10 00:02:35.730 [WARNING][5859] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" HandleID="k8s-pod-network.73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:35.734989 containerd[1794]: 2025-05-10 00:02:35.730 [INFO][5859] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" HandleID="k8s-pod-network.73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:35.734989 containerd[1794]: 2025-05-10 00:02:35.731 [INFO][5859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:35.734989 containerd[1794]: 2025-05-10 00:02:35.733 [INFO][5852] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" May 10 00:02:35.734989 containerd[1794]: time="2025-05-10T00:02:35.734971318Z" level=info msg="TearDown network for sandbox \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\" successfully" May 10 00:02:35.735412 containerd[1794]: time="2025-05-10T00:02:35.735004558Z" level=info msg="StopPodSandbox for \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\" returns successfully" May 10 00:02:35.736367 containerd[1794]: time="2025-05-10T00:02:35.736329119Z" level=info msg="RemovePodSandbox for \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\"" May 10 00:02:35.736367 containerd[1794]: time="2025-05-10T00:02:35.736363639Z" level=info msg="Forcibly stopping sandbox \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\"" May 10 00:02:35.813254 containerd[1794]: 2025-05-10 00:02:35.772 [WARNING][5877] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0", GenerateName:"calico-kube-controllers-7db954c79f-", Namespace:"calico-system", SelfLink:"", UID:"4b7a7e52-bcf8-45a1-a489-e1af4d3bb92d", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db954c79f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"9ba79672a1ef3a108785ace47fd57b18a600da1ee23cd0cf21d19748dc9df972", Pod:"calico-kube-controllers-7db954c79f-lcw2d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali131541be6c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:35.813254 containerd[1794]: 2025-05-10 00:02:35.773 [INFO][5877] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" May 10 00:02:35.813254 containerd[1794]: 2025-05-10 00:02:35.773 [INFO][5877] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" iface="eth0" netns="" May 10 00:02:35.813254 containerd[1794]: 2025-05-10 00:02:35.773 [INFO][5877] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" May 10 00:02:35.813254 containerd[1794]: 2025-05-10 00:02:35.773 [INFO][5877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" May 10 00:02:35.813254 containerd[1794]: 2025-05-10 00:02:35.797 [INFO][5884] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" HandleID="k8s-pod-network.73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:35.813254 containerd[1794]: 2025-05-10 00:02:35.798 [INFO][5884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:35.813254 containerd[1794]: 2025-05-10 00:02:35.798 [INFO][5884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:35.813254 containerd[1794]: 2025-05-10 00:02:35.806 [WARNING][5884] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" HandleID="k8s-pod-network.73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:35.813254 containerd[1794]: 2025-05-10 00:02:35.807 [INFO][5884] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" HandleID="k8s-pod-network.73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--kube--controllers--7db954c79f--lcw2d-eth0" May 10 00:02:35.813254 containerd[1794]: 2025-05-10 00:02:35.810 [INFO][5884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:35.813254 containerd[1794]: 2025-05-10 00:02:35.811 [INFO][5877] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311" May 10 00:02:35.813842 containerd[1794]: time="2025-05-10T00:02:35.813293742Z" level=info msg="TearDown network for sandbox \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\" successfully" May 10 00:02:35.824517 containerd[1794]: time="2025-05-10T00:02:35.824469905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 10 00:02:35.824920 containerd[1794]: time="2025-05-10T00:02:35.824550025Z" level=info msg="RemovePodSandbox \"73d67d21d67f90bfc6b69cfbb7e7686626442d7033996667bc525fef108a6311\" returns successfully" May 10 00:02:35.825632 containerd[1794]: time="2025-05-10T00:02:35.825326185Z" level=info msg="StopPodSandbox for \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\"" May 10 00:02:35.902956 containerd[1794]: 2025-05-10 00:02:35.862 [WARNING][5902] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ffcbd6aa-6c16-4f74-ab76-6cff432c2624", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce", Pod:"coredns-7db6d8ff4d-jl5p6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48d2fe5df46", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:35.902956 containerd[1794]: 2025-05-10 00:02:35.864 [INFO][5902] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" May 10 00:02:35.902956 containerd[1794]: 2025-05-10 00:02:35.864 [INFO][5902] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" iface="eth0" netns="" May 10 00:02:35.902956 containerd[1794]: 2025-05-10 00:02:35.864 [INFO][5902] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" May 10 00:02:35.902956 containerd[1794]: 2025-05-10 00:02:35.864 [INFO][5902] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" May 10 00:02:35.902956 containerd[1794]: 2025-05-10 00:02:35.889 [INFO][5909] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" HandleID="k8s-pod-network.994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:35.902956 containerd[1794]: 2025-05-10 00:02:35.889 [INFO][5909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 10 00:02:35.902956 containerd[1794]: 2025-05-10 00:02:35.889 [INFO][5909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:35.902956 containerd[1794]: 2025-05-10 00:02:35.898 [WARNING][5909] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" HandleID="k8s-pod-network.994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:35.902956 containerd[1794]: 2025-05-10 00:02:35.898 [INFO][5909] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" HandleID="k8s-pod-network.994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:35.902956 containerd[1794]: 2025-05-10 00:02:35.899 [INFO][5909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:35.902956 containerd[1794]: 2025-05-10 00:02:35.901 [INFO][5902] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" May 10 00:02:35.903861 containerd[1794]: time="2025-05-10T00:02:35.903006568Z" level=info msg="TearDown network for sandbox \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\" successfully" May 10 00:02:35.903861 containerd[1794]: time="2025-05-10T00:02:35.903032608Z" level=info msg="StopPodSandbox for \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\" returns successfully" May 10 00:02:35.904296 containerd[1794]: time="2025-05-10T00:02:35.903997969Z" level=info msg="RemovePodSandbox for \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\"" May 10 00:02:35.904296 containerd[1794]: time="2025-05-10T00:02:35.904032329Z" level=info msg="Forcibly stopping sandbox \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\"" May 10 00:02:35.980581 containerd[1794]: 2025-05-10 00:02:35.948 [WARNING][5927] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ffcbd6aa-6c16-4f74-ab76-6cff432c2624", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"eea8ce83ec774287321c9843c4540843b30a17471d8ed88e2b71d1e11f3107ce", Pod:"coredns-7db6d8ff4d-jl5p6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48d2fe5df46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:35.980581 containerd[1794]: 2025-05-10 00:02:35.949 [INFO][5927] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" May 10 00:02:35.980581 containerd[1794]: 2025-05-10 00:02:35.949 [INFO][5927] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" iface="eth0" netns="" May 10 00:02:35.980581 containerd[1794]: 2025-05-10 00:02:35.949 [INFO][5927] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" May 10 00:02:35.980581 containerd[1794]: 2025-05-10 00:02:35.949 [INFO][5927] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" May 10 00:02:35.980581 containerd[1794]: 2025-05-10 00:02:35.967 [INFO][5934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" HandleID="k8s-pod-network.994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:35.980581 containerd[1794]: 2025-05-10 00:02:35.968 [INFO][5934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:35.980581 containerd[1794]: 2025-05-10 00:02:35.968 [INFO][5934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:35.980581 containerd[1794]: 2025-05-10 00:02:35.976 [WARNING][5934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" HandleID="k8s-pod-network.994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:35.980581 containerd[1794]: 2025-05-10 00:02:35.976 [INFO][5934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" HandleID="k8s-pod-network.994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--jl5p6-eth0" May 10 00:02:35.980581 containerd[1794]: 2025-05-10 00:02:35.977 [INFO][5934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:35.980581 containerd[1794]: 2025-05-10 00:02:35.979 [INFO][5927] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275" May 10 00:02:35.981599 containerd[1794]: time="2025-05-10T00:02:35.981103312Z" level=info msg="TearDown network for sandbox \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\" successfully" May 10 00:02:35.989331 containerd[1794]: time="2025-05-10T00:02:35.989181634Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 10 00:02:35.989331 containerd[1794]: time="2025-05-10T00:02:35.989260714Z" level=info msg="RemovePodSandbox \"994af774e9804eaa93099133cf489c684b6d7e5d69419e52ae98b4823a16e275\" returns successfully" May 10 00:02:35.990317 containerd[1794]: time="2025-05-10T00:02:35.990106674Z" level=info msg="StopPodSandbox for \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\"" May 10 00:02:36.064126 containerd[1794]: 2025-05-10 00:02:36.029 [WARNING][5952] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0", GenerateName:"calico-apiserver-794797747f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d72ed92-d321-47fa-9ed1-47ca14227fe8", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794797747f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c", Pod:"calico-apiserver-794797747f-2h685", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64b3b8ac7b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:36.064126 containerd[1794]: 2025-05-10 00:02:36.029 [INFO][5952] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" May 10 00:02:36.064126 containerd[1794]: 2025-05-10 00:02:36.029 [INFO][5952] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" iface="eth0" netns="" May 10 00:02:36.064126 containerd[1794]: 2025-05-10 00:02:36.029 [INFO][5952] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" May 10 00:02:36.064126 containerd[1794]: 2025-05-10 00:02:36.029 [INFO][5952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" May 10 00:02:36.064126 containerd[1794]: 2025-05-10 00:02:36.050 [INFO][5960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" HandleID="k8s-pod-network.83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:36.064126 containerd[1794]: 2025-05-10 00:02:36.050 [INFO][5960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:36.064126 containerd[1794]: 2025-05-10 00:02:36.050 [INFO][5960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:36.064126 containerd[1794]: 2025-05-10 00:02:36.059 [WARNING][5960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" HandleID="k8s-pod-network.83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:36.064126 containerd[1794]: 2025-05-10 00:02:36.059 [INFO][5960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" HandleID="k8s-pod-network.83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:36.064126 containerd[1794]: 2025-05-10 00:02:36.060 [INFO][5960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:36.064126 containerd[1794]: 2025-05-10 00:02:36.062 [INFO][5952] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" May 10 00:02:36.065030 containerd[1794]: time="2025-05-10T00:02:36.064244816Z" level=info msg="TearDown network for sandbox \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\" successfully" May 10 00:02:36.065030 containerd[1794]: time="2025-05-10T00:02:36.064674296Z" level=info msg="StopPodSandbox for \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\" returns successfully" May 10 00:02:36.065649 containerd[1794]: time="2025-05-10T00:02:36.065309337Z" level=info msg="RemovePodSandbox for \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\"" May 10 00:02:36.065649 containerd[1794]: time="2025-05-10T00:02:36.065341457Z" level=info msg="Forcibly stopping sandbox \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\"" May 10 00:02:36.142408 containerd[1794]: 2025-05-10 00:02:36.106 [WARNING][5978] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0", GenerateName:"calico-apiserver-794797747f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d72ed92-d321-47fa-9ed1-47ca14227fe8", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794797747f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"2b3203d157e54cfa3fa668edf66e55f298e36fa7b234c98f314265adbdd04d5c", Pod:"calico-apiserver-794797747f-2h685", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64b3b8ac7b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:36.142408 containerd[1794]: 2025-05-10 00:02:36.106 [INFO][5978] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" May 10 00:02:36.142408 containerd[1794]: 2025-05-10 00:02:36.106 [INFO][5978] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" iface="eth0" netns="" May 10 00:02:36.142408 containerd[1794]: 2025-05-10 00:02:36.106 [INFO][5978] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" May 10 00:02:36.142408 containerd[1794]: 2025-05-10 00:02:36.106 [INFO][5978] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" May 10 00:02:36.142408 containerd[1794]: 2025-05-10 00:02:36.127 [INFO][5985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" HandleID="k8s-pod-network.83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:36.142408 containerd[1794]: 2025-05-10 00:02:36.128 [INFO][5985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:36.142408 containerd[1794]: 2025-05-10 00:02:36.128 [INFO][5985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:36.142408 containerd[1794]: 2025-05-10 00:02:36.137 [WARNING][5985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" HandleID="k8s-pod-network.83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:36.142408 containerd[1794]: 2025-05-10 00:02:36.138 [INFO][5985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" HandleID="k8s-pod-network.83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--2h685-eth0" May 10 00:02:36.142408 containerd[1794]: 2025-05-10 00:02:36.139 [INFO][5985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:36.142408 containerd[1794]: 2025-05-10 00:02:36.140 [INFO][5978] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b" May 10 00:02:36.143032 containerd[1794]: time="2025-05-10T00:02:36.142442920Z" level=info msg="TearDown network for sandbox \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\" successfully" May 10 00:02:36.160020 containerd[1794]: time="2025-05-10T00:02:36.159582205Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 10 00:02:36.160020 containerd[1794]: time="2025-05-10T00:02:36.159660885Z" level=info msg="RemovePodSandbox \"83c7089ad2df8fee195c5851348b1b0689265db7babad3b7f293202efe131c9b\" returns successfully" May 10 00:02:36.160763 containerd[1794]: time="2025-05-10T00:02:36.160346605Z" level=info msg="StopPodSandbox for \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\"" May 10 00:02:36.247609 containerd[1794]: 2025-05-10 00:02:36.210 [WARNING][6003] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0", GenerateName:"calico-apiserver-794797747f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b77bf54-6b1a-44ba-a50d-01c24a8c08ab", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794797747f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432", Pod:"calico-apiserver-794797747f-x94dk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9d2a2fa605", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:36.247609 containerd[1794]: 2025-05-10 00:02:36.210 [INFO][6003] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" May 10 00:02:36.247609 containerd[1794]: 2025-05-10 00:02:36.210 [INFO][6003] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" iface="eth0" netns="" May 10 00:02:36.247609 containerd[1794]: 2025-05-10 00:02:36.210 [INFO][6003] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" May 10 00:02:36.247609 containerd[1794]: 2025-05-10 00:02:36.210 [INFO][6003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" May 10 00:02:36.247609 containerd[1794]: 2025-05-10 00:02:36.233 [INFO][6010] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" HandleID="k8s-pod-network.500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:36.247609 containerd[1794]: 2025-05-10 00:02:36.233 [INFO][6010] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:36.247609 containerd[1794]: 2025-05-10 00:02:36.233 [INFO][6010] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:36.247609 containerd[1794]: 2025-05-10 00:02:36.242 [WARNING][6010] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" HandleID="k8s-pod-network.500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:36.247609 containerd[1794]: 2025-05-10 00:02:36.243 [INFO][6010] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" HandleID="k8s-pod-network.500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:36.247609 containerd[1794]: 2025-05-10 00:02:36.244 [INFO][6010] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:36.247609 containerd[1794]: 2025-05-10 00:02:36.246 [INFO][6003] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" May 10 00:02:36.248305 containerd[1794]: time="2025-05-10T00:02:36.248140391Z" level=info msg="TearDown network for sandbox \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\" successfully" May 10 00:02:36.248305 containerd[1794]: time="2025-05-10T00:02:36.248192871Z" level=info msg="StopPodSandbox for \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\" returns successfully" May 10 00:02:36.249069 containerd[1794]: time="2025-05-10T00:02:36.248693871Z" level=info msg="RemovePodSandbox for \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\"" May 10 00:02:36.249340 containerd[1794]: time="2025-05-10T00:02:36.249208791Z" level=info msg="Forcibly stopping sandbox \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\"" May 10 00:02:36.326307 containerd[1794]: 2025-05-10 00:02:36.290 [WARNING][6028] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0", GenerateName:"calico-apiserver-794797747f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b77bf54-6b1a-44ba-a50d-01c24a8c08ab", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794797747f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"f488fc01d1c5fa3004035dd7ec8607108622653f8307a4ddf0db0070eda39432", Pod:"calico-apiserver-794797747f-x94dk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9d2a2fa605", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:36.326307 containerd[1794]: 2025-05-10 00:02:36.290 [INFO][6028] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" May 10 00:02:36.326307 containerd[1794]: 2025-05-10 00:02:36.290 [INFO][6028] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" iface="eth0" netns="" May 10 00:02:36.326307 containerd[1794]: 2025-05-10 00:02:36.290 [INFO][6028] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" May 10 00:02:36.326307 containerd[1794]: 2025-05-10 00:02:36.290 [INFO][6028] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" May 10 00:02:36.326307 containerd[1794]: 2025-05-10 00:02:36.312 [INFO][6035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" HandleID="k8s-pod-network.500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:36.326307 containerd[1794]: 2025-05-10 00:02:36.312 [INFO][6035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:36.326307 containerd[1794]: 2025-05-10 00:02:36.312 [INFO][6035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:36.326307 containerd[1794]: 2025-05-10 00:02:36.321 [WARNING][6035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" HandleID="k8s-pod-network.500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:36.326307 containerd[1794]: 2025-05-10 00:02:36.321 [INFO][6035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" HandleID="k8s-pod-network.500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-calico--apiserver--794797747f--x94dk-eth0" May 10 00:02:36.326307 containerd[1794]: 2025-05-10 00:02:36.323 [INFO][6035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:36.326307 containerd[1794]: 2025-05-10 00:02:36.324 [INFO][6028] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f" May 10 00:02:36.326856 containerd[1794]: time="2025-05-10T00:02:36.326352094Z" level=info msg="TearDown network for sandbox \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\" successfully" May 10 00:02:36.336123 containerd[1794]: time="2025-05-10T00:02:36.336071857Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 10 00:02:36.336282 containerd[1794]: time="2025-05-10T00:02:36.336148737Z" level=info msg="RemovePodSandbox \"500f30dbb582c8fb957534adfdae91da89e72b19f49becfe25f3efcc06f29b3f\" returns successfully" May 10 00:02:36.336859 containerd[1794]: time="2025-05-10T00:02:36.336556417Z" level=info msg="StopPodSandbox for \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\"" May 10 00:02:36.415945 containerd[1794]: 2025-05-10 00:02:36.380 [WARNING][6054] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"33a5d1c8-646f-487e-a061-b26667f1063e", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745", Pod:"coredns-7db6d8ff4d-fkpjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71586b11a1c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:36.415945 containerd[1794]: 2025-05-10 00:02:36.380 [INFO][6054] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" May 10 00:02:36.415945 containerd[1794]: 2025-05-10 00:02:36.380 [INFO][6054] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" iface="eth0" netns="" May 10 00:02:36.415945 containerd[1794]: 2025-05-10 00:02:36.380 [INFO][6054] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" May 10 00:02:36.415945 containerd[1794]: 2025-05-10 00:02:36.380 [INFO][6054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" May 10 00:02:36.415945 containerd[1794]: 2025-05-10 00:02:36.401 [INFO][6061] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" HandleID="k8s-pod-network.cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:36.415945 containerd[1794]: 2025-05-10 00:02:36.401 [INFO][6061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 10 00:02:36.415945 containerd[1794]: 2025-05-10 00:02:36.402 [INFO][6061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:36.415945 containerd[1794]: 2025-05-10 00:02:36.410 [WARNING][6061] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" HandleID="k8s-pod-network.cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:36.415945 containerd[1794]: 2025-05-10 00:02:36.411 [INFO][6061] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" HandleID="k8s-pod-network.cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:36.415945 containerd[1794]: 2025-05-10 00:02:36.412 [INFO][6061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:36.415945 containerd[1794]: 2025-05-10 00:02:36.414 [INFO][6054] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" May 10 00:02:36.416519 containerd[1794]: time="2025-05-10T00:02:36.415986601Z" level=info msg="TearDown network for sandbox \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\" successfully" May 10 00:02:36.416519 containerd[1794]: time="2025-05-10T00:02:36.416014521Z" level=info msg="StopPodSandbox for \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\" returns successfully" May 10 00:02:36.417356 containerd[1794]: time="2025-05-10T00:02:36.417038001Z" level=info msg="RemovePodSandbox for \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\"" May 10 00:02:36.417356 containerd[1794]: time="2025-05-10T00:02:36.417072441Z" level=info msg="Forcibly stopping sandbox \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\"" May 10 00:02:36.502819 containerd[1794]: 2025-05-10 00:02:36.469 [WARNING][6079] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"33a5d1c8-646f-487e-a061-b26667f1063e", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4cc30cd86c", ContainerID:"afa86be09cf0acdd7726dbe1e79598c0e54d6b9f8aa8841ce50d890adeceb745", Pod:"coredns-7db6d8ff4d-fkpjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71586b11a1c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:02:36.502819 containerd[1794]: 2025-05-10 00:02:36.469 [INFO][6079] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" May 10 00:02:36.502819 containerd[1794]: 2025-05-10 00:02:36.469 [INFO][6079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" iface="eth0" netns="" May 10 00:02:36.502819 containerd[1794]: 2025-05-10 00:02:36.469 [INFO][6079] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" May 10 00:02:36.502819 containerd[1794]: 2025-05-10 00:02:36.469 [INFO][6079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" May 10 00:02:36.502819 containerd[1794]: 2025-05-10 00:02:36.489 [INFO][6086] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" HandleID="k8s-pod-network.cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:36.502819 containerd[1794]: 2025-05-10 00:02:36.489 [INFO][6086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:02:36.502819 containerd[1794]: 2025-05-10 00:02:36.490 [INFO][6086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:02:36.502819 containerd[1794]: 2025-05-10 00:02:36.498 [WARNING][6086] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" HandleID="k8s-pod-network.cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:36.502819 containerd[1794]: 2025-05-10 00:02:36.498 [INFO][6086] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" HandleID="k8s-pod-network.cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" Workload="ci--4081.3.3--n--4cc30cd86c-k8s-coredns--7db6d8ff4d--fkpjk-eth0" May 10 00:02:36.502819 containerd[1794]: 2025-05-10 00:02:36.500 [INFO][6086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:02:36.502819 containerd[1794]: 2025-05-10 00:02:36.501 [INFO][6079] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e" May 10 00:02:36.502819 containerd[1794]: time="2025-05-10T00:02:36.502773587Z" level=info msg="TearDown network for sandbox \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\" successfully" May 10 00:02:36.514386 containerd[1794]: time="2025-05-10T00:02:36.514328190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 10 00:02:36.514506 containerd[1794]: time="2025-05-10T00:02:36.514414710Z" level=info msg="RemovePodSandbox \"cb472749e6be875706319f89158d207a52ec0451e50c8dc9490a6e4e226e595e\" returns successfully" May 10 00:03:05.681542 kubelet[3407]: I0510 00:03:05.681432 3407 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:03:08.341113 kubelet[3407]: I0510 00:03:08.340465 3407 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:03:43.381048 systemd[1]: Started sshd@7-10.200.20.24:22-10.200.16.10:39982.service - OpenSSH per-connection server daemon (10.200.16.10:39982). May 10 00:03:43.828731 sshd[6271]: Accepted publickey for core from 10.200.16.10 port 39982 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:03:43.830970 sshd[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:03:43.835379 systemd-logind[1765]: New session 10 of user core. May 10 00:03:43.843115 systemd[1]: Started session-10.scope - Session 10 of User core. May 10 00:03:44.243298 sshd[6271]: pam_unix(sshd:session): session closed for user core May 10 00:03:44.247837 systemd[1]: sshd@7-10.200.20.24:22-10.200.16.10:39982.service: Deactivated successfully. May 10 00:03:44.251114 systemd[1]: session-10.scope: Deactivated successfully. May 10 00:03:44.251662 systemd-logind[1765]: Session 10 logged out. Waiting for processes to exit. May 10 00:03:44.253568 systemd-logind[1765]: Removed session 10. May 10 00:03:49.322004 systemd[1]: Started sshd@8-10.200.20.24:22-10.200.16.10:48558.service - OpenSSH per-connection server daemon (10.200.16.10:48558). May 10 00:03:49.769852 sshd[6288]: Accepted publickey for core from 10.200.16.10 port 48558 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:03:49.771497 sshd[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:03:49.776436 systemd-logind[1765]: New session 11 of user core. 
May 10 00:03:49.783291 systemd[1]: Started session-11.scope - Session 11 of User core. May 10 00:03:50.158971 sshd[6288]: pam_unix(sshd:session): session closed for user core May 10 00:03:50.162036 systemd[1]: sshd@8-10.200.20.24:22-10.200.16.10:48558.service: Deactivated successfully. May 10 00:03:50.165929 systemd[1]: session-11.scope: Deactivated successfully. May 10 00:03:50.167160 systemd-logind[1765]: Session 11 logged out. Waiting for processes to exit. May 10 00:03:50.168184 systemd-logind[1765]: Removed session 11. May 10 00:03:55.243951 systemd[1]: Started sshd@9-10.200.20.24:22-10.200.16.10:48572.service - OpenSSH per-connection server daemon (10.200.16.10:48572). May 10 00:03:55.718985 sshd[6310]: Accepted publickey for core from 10.200.16.10 port 48572 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:03:55.719758 sshd[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:03:55.723630 systemd-logind[1765]: New session 12 of user core. May 10 00:03:55.733234 systemd[1]: Started session-12.scope - Session 12 of User core. May 10 00:03:56.144695 sshd[6310]: pam_unix(sshd:session): session closed for user core May 10 00:03:56.148629 systemd[1]: sshd@9-10.200.20.24:22-10.200.16.10:48572.service: Deactivated successfully. May 10 00:03:56.151591 systemd[1]: session-12.scope: Deactivated successfully. May 10 00:03:56.152868 systemd-logind[1765]: Session 12 logged out. Waiting for processes to exit. May 10 00:03:56.154704 systemd-logind[1765]: Removed session 12. May 10 00:04:01.228056 systemd[1]: Started sshd@10-10.200.20.24:22-10.200.16.10:52424.service - OpenSSH per-connection server daemon (10.200.16.10:52424). 
May 10 00:04:01.707191 sshd[6337]: Accepted publickey for core from 10.200.16.10 port 52424 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:01.709330 sshd[6337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:01.715155 systemd-logind[1765]: New session 13 of user core. May 10 00:04:01.719109 systemd[1]: Started session-13.scope - Session 13 of User core. May 10 00:04:02.120369 sshd[6337]: pam_unix(sshd:session): session closed for user core May 10 00:04:02.124439 systemd[1]: sshd@10-10.200.20.24:22-10.200.16.10:52424.service: Deactivated successfully. May 10 00:04:02.127053 systemd-logind[1765]: Session 13 logged out. Waiting for processes to exit. May 10 00:04:02.127562 systemd[1]: session-13.scope: Deactivated successfully. May 10 00:04:02.129610 systemd-logind[1765]: Removed session 13. May 10 00:04:02.199027 systemd[1]: Started sshd@11-10.200.20.24:22-10.200.16.10:52434.service - OpenSSH per-connection server daemon (10.200.16.10:52434). May 10 00:04:02.641770 sshd[6371]: Accepted publickey for core from 10.200.16.10 port 52434 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:02.643639 sshd[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:02.650694 systemd-logind[1765]: New session 14 of user core. May 10 00:04:02.655082 systemd[1]: Started session-14.scope - Session 14 of User core. May 10 00:04:03.069190 sshd[6371]: pam_unix(sshd:session): session closed for user core May 10 00:04:03.076484 systemd[1]: sshd@11-10.200.20.24:22-10.200.16.10:52434.service: Deactivated successfully. May 10 00:04:03.079586 systemd[1]: session-14.scope: Deactivated successfully. May 10 00:04:03.080532 systemd-logind[1765]: Session 14 logged out. Waiting for processes to exit. May 10 00:04:03.081709 systemd-logind[1765]: Removed session 14. 
May 10 00:04:03.154198 systemd[1]: Started sshd@12-10.200.20.24:22-10.200.16.10:52450.service - OpenSSH per-connection server daemon (10.200.16.10:52450). May 10 00:04:03.611120 sshd[6386]: Accepted publickey for core from 10.200.16.10 port 52450 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:03.612705 sshd[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:03.620284 systemd-logind[1765]: New session 15 of user core. May 10 00:04:03.627043 systemd[1]: Started session-15.scope - Session 15 of User core. May 10 00:04:04.014970 sshd[6386]: pam_unix(sshd:session): session closed for user core May 10 00:04:04.019458 systemd-logind[1765]: Session 15 logged out. Waiting for processes to exit. May 10 00:04:04.022479 systemd[1]: sshd@12-10.200.20.24:22-10.200.16.10:52450.service: Deactivated successfully. May 10 00:04:04.024061 systemd[1]: session-15.scope: Deactivated successfully. May 10 00:04:04.029549 systemd-logind[1765]: Removed session 15. May 10 00:04:09.105988 systemd[1]: Started sshd@13-10.200.20.24:22-10.200.16.10:59176.service - OpenSSH per-connection server daemon (10.200.16.10:59176). May 10 00:04:09.586493 sshd[6422]: Accepted publickey for core from 10.200.16.10 port 59176 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:09.587951 sshd[6422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:09.593627 systemd-logind[1765]: New session 16 of user core. May 10 00:04:09.597019 systemd[1]: Started session-16.scope - Session 16 of User core. May 10 00:04:09.962463 systemd[1]: run-containerd-runc-k8s.io-d10259deb132f1e52bf74a8e0577c060dd87426f848167d1f861da14812c9cd3-runc.EGtANz.mount: Deactivated successfully. May 10 00:04:09.996527 sshd[6422]: pam_unix(sshd:session): session closed for user core May 10 00:04:10.000907 systemd-logind[1765]: Session 16 logged out. Waiting for processes to exit. 
May 10 00:04:10.001370 systemd[1]: sshd@13-10.200.20.24:22-10.200.16.10:59176.service: Deactivated successfully. May 10 00:04:10.005170 systemd[1]: session-16.scope: Deactivated successfully. May 10 00:04:10.006236 systemd-logind[1765]: Removed session 16. May 10 00:04:10.075057 systemd[1]: Started sshd@14-10.200.20.24:22-10.200.16.10:59186.service - OpenSSH per-connection server daemon (10.200.16.10:59186). May 10 00:04:10.526591 sshd[6454]: Accepted publickey for core from 10.200.16.10 port 59186 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:10.528268 sshd[6454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:10.533012 systemd-logind[1765]: New session 17 of user core. May 10 00:04:10.538041 systemd[1]: Started session-17.scope - Session 17 of User core. May 10 00:04:11.030688 sshd[6454]: pam_unix(sshd:session): session closed for user core May 10 00:04:11.034012 systemd-logind[1765]: Session 17 logged out. Waiting for processes to exit. May 10 00:04:11.034973 systemd[1]: sshd@14-10.200.20.24:22-10.200.16.10:59186.service: Deactivated successfully. May 10 00:04:11.040315 systemd[1]: session-17.scope: Deactivated successfully. May 10 00:04:11.041081 systemd-logind[1765]: Removed session 17. May 10 00:04:11.112006 systemd[1]: Started sshd@15-10.200.20.24:22-10.200.16.10:59196.service - OpenSSH per-connection server daemon (10.200.16.10:59196). May 10 00:04:11.588196 sshd[6466]: Accepted publickey for core from 10.200.16.10 port 59196 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:11.589626 sshd[6466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:11.593859 systemd-logind[1765]: New session 18 of user core. May 10 00:04:11.600116 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 10 00:04:13.692446 sshd[6466]: pam_unix(sshd:session): session closed for user core May 10 00:04:13.696569 systemd[1]: sshd@15-10.200.20.24:22-10.200.16.10:59196.service: Deactivated successfully. May 10 00:04:13.700471 systemd[1]: session-18.scope: Deactivated successfully. May 10 00:04:13.701691 systemd-logind[1765]: Session 18 logged out. Waiting for processes to exit. May 10 00:04:13.702785 systemd-logind[1765]: Removed session 18. May 10 00:04:13.779979 systemd[1]: Started sshd@16-10.200.20.24:22-10.200.16.10:59202.service - OpenSSH per-connection server daemon (10.200.16.10:59202). May 10 00:04:14.266519 sshd[6485]: Accepted publickey for core from 10.200.16.10 port 59202 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:14.268172 sshd[6485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:14.272437 systemd-logind[1765]: New session 19 of user core. May 10 00:04:14.283147 systemd[1]: Started session-19.scope - Session 19 of User core. May 10 00:04:14.802986 sshd[6485]: pam_unix(sshd:session): session closed for user core May 10 00:04:14.805906 systemd-logind[1765]: Session 19 logged out. Waiting for processes to exit. May 10 00:04:14.806714 systemd[1]: sshd@16-10.200.20.24:22-10.200.16.10:59202.service: Deactivated successfully. May 10 00:04:14.811676 systemd[1]: session-19.scope: Deactivated successfully. May 10 00:04:14.813193 systemd-logind[1765]: Removed session 19. May 10 00:04:14.887297 systemd[1]: Started sshd@17-10.200.20.24:22-10.200.16.10:59208.service - OpenSSH per-connection server daemon (10.200.16.10:59208). May 10 00:04:15.360370 sshd[6497]: Accepted publickey for core from 10.200.16.10 port 59208 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:15.362148 sshd[6497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:15.366634 systemd-logind[1765]: New session 20 of user core. 
May 10 00:04:15.371057 systemd[1]: Started session-20.scope - Session 20 of User core. May 10 00:04:15.768816 sshd[6497]: pam_unix(sshd:session): session closed for user core May 10 00:04:15.772158 systemd[1]: sshd@17-10.200.20.24:22-10.200.16.10:59208.service: Deactivated successfully. May 10 00:04:15.775258 systemd[1]: session-20.scope: Deactivated successfully. May 10 00:04:15.775472 systemd-logind[1765]: Session 20 logged out. Waiting for processes to exit. May 10 00:04:15.777646 systemd-logind[1765]: Removed session 20. May 10 00:04:20.848013 systemd[1]: Started sshd@18-10.200.20.24:22-10.200.16.10:42168.service - OpenSSH per-connection server daemon (10.200.16.10:42168). May 10 00:04:21.296829 sshd[6515]: Accepted publickey for core from 10.200.16.10 port 42168 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:21.298236 sshd[6515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:21.302673 systemd-logind[1765]: New session 21 of user core. May 10 00:04:21.307004 systemd[1]: Started session-21.scope - Session 21 of User core. May 10 00:04:21.681586 sshd[6515]: pam_unix(sshd:session): session closed for user core May 10 00:04:21.684875 systemd-logind[1765]: Session 21 logged out. Waiting for processes to exit. May 10 00:04:21.686025 systemd[1]: sshd@18-10.200.20.24:22-10.200.16.10:42168.service: Deactivated successfully. May 10 00:04:21.690534 systemd[1]: session-21.scope: Deactivated successfully. May 10 00:04:21.692532 systemd-logind[1765]: Removed session 21. May 10 00:04:26.760992 systemd[1]: Started sshd@19-10.200.20.24:22-10.200.16.10:42178.service - OpenSSH per-connection server daemon (10.200.16.10:42178). 
May 10 00:04:27.204909 sshd[6530]: Accepted publickey for core from 10.200.16.10 port 42178 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:27.205957 sshd[6530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:27.211011 systemd-logind[1765]: New session 22 of user core. May 10 00:04:27.216962 systemd[1]: Started session-22.scope - Session 22 of User core. May 10 00:04:27.633186 sshd[6530]: pam_unix(sshd:session): session closed for user core May 10 00:04:27.641256 systemd[1]: sshd@19-10.200.20.24:22-10.200.16.10:42178.service: Deactivated successfully. May 10 00:04:27.649990 systemd[1]: session-22.scope: Deactivated successfully. May 10 00:04:27.654092 systemd-logind[1765]: Session 22 logged out. Waiting for processes to exit. May 10 00:04:27.658366 systemd-logind[1765]: Removed session 22. May 10 00:04:32.708984 systemd[1]: Started sshd@20-10.200.20.24:22-10.200.16.10:37418.service - OpenSSH per-connection server daemon (10.200.16.10:37418). May 10 00:04:33.156711 sshd[6544]: Accepted publickey for core from 10.200.16.10 port 37418 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:33.158226 sshd[6544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:33.162268 systemd-logind[1765]: New session 23 of user core. May 10 00:04:33.171087 systemd[1]: Started session-23.scope - Session 23 of User core. May 10 00:04:33.543890 sshd[6544]: pam_unix(sshd:session): session closed for user core May 10 00:04:33.547540 systemd-logind[1765]: Session 23 logged out. Waiting for processes to exit. May 10 00:04:33.548277 systemd[1]: sshd@20-10.200.20.24:22-10.200.16.10:37418.service: Deactivated successfully. May 10 00:04:33.552893 systemd[1]: session-23.scope: Deactivated successfully. May 10 00:04:33.554341 systemd-logind[1765]: Removed session 23. 
May 10 00:04:38.622975 systemd[1]: Started sshd@21-10.200.20.24:22-10.200.16.10:37430.service - OpenSSH per-connection server daemon (10.200.16.10:37430). May 10 00:04:39.070123 sshd[6582]: Accepted publickey for core from 10.200.16.10 port 37430 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:39.071644 sshd[6582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:39.075869 systemd-logind[1765]: New session 24 of user core. May 10 00:04:39.081993 systemd[1]: Started session-24.scope - Session 24 of User core. May 10 00:04:39.453652 sshd[6582]: pam_unix(sshd:session): session closed for user core May 10 00:04:39.457596 systemd[1]: sshd@21-10.200.20.24:22-10.200.16.10:37430.service: Deactivated successfully. May 10 00:04:39.461479 systemd-logind[1765]: Session 24 logged out. Waiting for processes to exit. May 10 00:04:39.463063 systemd[1]: session-24.scope: Deactivated successfully. May 10 00:04:39.464227 systemd-logind[1765]: Removed session 24. May 10 00:04:44.530985 systemd[1]: Started sshd@22-10.200.20.24:22-10.200.16.10:44436.service - OpenSSH per-connection server daemon (10.200.16.10:44436). May 10 00:04:44.975700 sshd[6615]: Accepted publickey for core from 10.200.16.10 port 44436 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:44.977143 sshd[6615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:44.981372 systemd-logind[1765]: New session 25 of user core. May 10 00:04:44.988011 systemd[1]: Started session-25.scope - Session 25 of User core. May 10 00:04:45.376885 sshd[6615]: pam_unix(sshd:session): session closed for user core May 10 00:04:45.381055 systemd-logind[1765]: Session 25 logged out. Waiting for processes to exit. May 10 00:04:45.381702 systemd[1]: sshd@22-10.200.20.24:22-10.200.16.10:44436.service: Deactivated successfully. May 10 00:04:45.386293 systemd[1]: session-25.scope: Deactivated successfully. 
May 10 00:04:45.387586 systemd-logind[1765]: Removed session 25. May 10 00:04:50.457687 systemd[1]: Started sshd@23-10.200.20.24:22-10.200.16.10:54504.service - OpenSSH per-connection server daemon (10.200.16.10:54504). May 10 00:04:50.904060 sshd[6628]: Accepted publickey for core from 10.200.16.10 port 54504 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:50.905463 sshd[6628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:50.909506 systemd-logind[1765]: New session 26 of user core. May 10 00:04:50.918064 systemd[1]: Started session-26.scope - Session 26 of User core. May 10 00:04:51.303982 sshd[6628]: pam_unix(sshd:session): session closed for user core May 10 00:04:51.308376 systemd[1]: sshd@23-10.200.20.24:22-10.200.16.10:54504.service: Deactivated successfully. May 10 00:04:51.313001 systemd[1]: session-26.scope: Deactivated successfully. May 10 00:04:51.315187 systemd-logind[1765]: Session 26 logged out. Waiting for processes to exit. May 10 00:04:51.316252 systemd-logind[1765]: Removed session 26. May 10 00:04:56.389091 systemd[1]: Started sshd@24-10.200.20.24:22-10.200.16.10:54508.service - OpenSSH per-connection server daemon (10.200.16.10:54508). May 10 00:04:56.870069 sshd[6644]: Accepted publickey for core from 10.200.16.10 port 54508 ssh2: RSA SHA256:DOtkdUDP5mb6MUY5b8/hpUG4hLvcPfUKFP/aFo/CwMA May 10 00:04:56.871451 sshd[6644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:04:56.875983 systemd-logind[1765]: New session 27 of user core. May 10 00:04:56.881958 systemd[1]: Started session-27.scope - Session 27 of User core. May 10 00:04:57.280481 sshd[6644]: pam_unix(sshd:session): session closed for user core May 10 00:04:57.284629 systemd-logind[1765]: Session 27 logged out. Waiting for processes to exit. May 10 00:04:57.285015 systemd[1]: sshd@24-10.200.20.24:22-10.200.16.10:54508.service: Deactivated successfully. 
May 10 00:04:57.289330 systemd[1]: session-27.scope: Deactivated successfully. May 10 00:04:57.290900 systemd-logind[1765]: Removed session 27.