Apr 30 12:33:34.401144 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 30 12:33:34.401172 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Tue Apr 29 22:28:35 -00 2025
Apr 30 12:33:34.401181 kernel: KASLR enabled
Apr 30 12:33:34.401187 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Apr 30 12:33:34.401195 kernel: printk: bootconsole [pl11] enabled
Apr 30 12:33:34.401200 kernel: efi: EFI v2.7 by EDK II
Apr 30 12:33:34.401208 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Apr 30 12:33:34.401214 kernel: random: crng init done
Apr 30 12:33:34.403282 kernel: secureboot: Secure boot disabled
Apr 30 12:33:34.403294 kernel: ACPI: Early table checksum verification disabled
Apr 30 12:33:34.403301 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Apr 30 12:33:34.403308 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 12:33:34.403315 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 12:33:34.403328 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 30 12:33:34.403336 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 12:33:34.403343 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 12:33:34.403349 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 12:33:34.403359 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 12:33:34.403365 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 12:33:34.403372 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 12:33:34.403379 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Apr 30 12:33:34.403386 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 12:33:34.403393 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Apr 30 12:33:34.403399 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Apr 30 12:33:34.403406 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Apr 30 12:33:34.403412 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Apr 30 12:33:34.403419 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Apr 30 12:33:34.403426 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Apr 30 12:33:34.403435 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Apr 30 12:33:34.403442 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Apr 30 12:33:34.403448 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Apr 30 12:33:34.403455 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Apr 30 12:33:34.403462 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Apr 30 12:33:34.403469 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Apr 30 12:33:34.403475 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Apr 30 12:33:34.403482 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Apr 30 12:33:34.403489 kernel: Zone ranges:
Apr 30 12:33:34.403496 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Apr 30 12:33:34.403503 kernel: DMA32 empty
Apr 30 12:33:34.403510 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Apr 30 12:33:34.403522 kernel: Movable zone start for each node
Apr 30 12:33:34.403529 kernel: Early memory node ranges
Apr 30 12:33:34.403536 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Apr 30 12:33:34.403543 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Apr 30 12:33:34.403550 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Apr 30 12:33:34.403559 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Apr 30 12:33:34.403567 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Apr 30 12:33:34.403574 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Apr 30 12:33:34.403581 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Apr 30 12:33:34.403588 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Apr 30 12:33:34.403596 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Apr 30 12:33:34.403603 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Apr 30 12:33:34.403611 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Apr 30 12:33:34.403618 kernel: psci: probing for conduit method from ACPI.
Apr 30 12:33:34.403625 kernel: psci: PSCIv1.1 detected in firmware.
Apr 30 12:33:34.403632 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 12:33:34.403638 kernel: psci: MIGRATE_INFO_TYPE not supported.
Apr 30 12:33:34.403647 kernel: psci: SMC Calling Convention v1.4
Apr 30 12:33:34.403654 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Apr 30 12:33:34.403661 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Apr 30 12:33:34.403669 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 12:33:34.403676 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 12:33:34.403684 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 30 12:33:34.403690 kernel: Detected PIPT I-cache on CPU0
Apr 30 12:33:34.403698 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 12:33:34.403705 kernel: CPU features: detected: Hardware dirty bit management
Apr 30 12:33:34.403712 kernel: CPU features: detected: Spectre-BHB
Apr 30 12:33:34.403719 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 30 12:33:34.403728 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 30 12:33:34.403736 kernel: CPU features: detected: ARM erratum 1418040
Apr 30 12:33:34.403743 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Apr 30 12:33:34.403750 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 30 12:33:34.403757 kernel: alternatives: applying boot alternatives
Apr 30 12:33:34.403765 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=984055eb0c340c9cf0fb51b368030ed72e75b7f2e065edc13766888ef0b42074
Apr 30 12:33:34.403773 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 12:33:34.403780 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 12:33:34.403787 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 12:33:34.403794 kernel: Fallback order for Node 0: 0
Apr 30 12:33:34.403802 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Apr 30 12:33:34.403811 kernel: Policy zone: Normal
Apr 30 12:33:34.403819 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 12:33:34.403825 kernel: software IO TLB: area num 2.
Apr 30 12:33:34.403833 kernel: software IO TLB: mapped [mem 0x0000000036540000-0x000000003a540000] (64MB)
Apr 30 12:33:34.403840 kernel: Memory: 3983592K/4194160K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 210568K reserved, 0K cma-reserved)
Apr 30 12:33:34.403847 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 12:33:34.403854 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 12:33:34.403863 kernel: rcu: RCU event tracing is enabled.
Apr 30 12:33:34.403870 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 12:33:34.403877 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 12:33:34.403885 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 12:33:34.403893 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 12:33:34.403900 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 12:33:34.403907 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 12:33:34.403914 kernel: GICv3: 960 SPIs implemented
Apr 30 12:33:34.403921 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 12:33:34.403928 kernel: Root IRQ handler: gic_handle_irq
Apr 30 12:33:34.403935 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 30 12:33:34.403942 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Apr 30 12:33:34.403949 kernel: ITS: No ITS available, not enabling LPIs
Apr 30 12:33:34.403956 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 12:33:34.403964 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 12:33:34.403971 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 30 12:33:34.403980 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 30 12:33:34.403987 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 30 12:33:34.403994 kernel: Console: colour dummy device 80x25
Apr 30 12:33:34.404002 kernel: printk: console [tty1] enabled
Apr 30 12:33:34.404009 kernel: ACPI: Core revision 20230628
Apr 30 12:33:34.404016 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 30 12:33:34.404024 kernel: pid_max: default: 32768 minimum: 301
Apr 30 12:33:34.404031 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 12:33:34.404039 kernel: landlock: Up and running.
Apr 30 12:33:34.404048 kernel: SELinux: Initializing.
Apr 30 12:33:34.404055 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 12:33:34.404063 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 12:33:34.404070 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 12:33:34.404078 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 12:33:34.404086 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Apr 30 12:33:34.404094 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Apr 30 12:33:34.404114 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 30 12:33:34.404122 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 12:33:34.404130 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 12:33:34.404139 kernel: Remapping and enabling EFI services.
Apr 30 12:33:34.404146 kernel: smp: Bringing up secondary CPUs ...
Apr 30 12:33:34.404157 kernel: Detected PIPT I-cache on CPU1
Apr 30 12:33:34.404164 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Apr 30 12:33:34.404172 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 12:33:34.404180 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 30 12:33:34.404188 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 12:33:34.404197 kernel: SMP: Total of 2 processors activated.
Apr 30 12:33:34.404206 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 12:33:34.404213 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Apr 30 12:33:34.404233 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 30 12:33:34.404241 kernel: CPU features: detected: CRC32 instructions
Apr 30 12:33:34.404249 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 30 12:33:34.404257 kernel: CPU features: detected: LSE atomic instructions
Apr 30 12:33:34.404264 kernel: CPU features: detected: Privileged Access Never
Apr 30 12:33:34.404272 kernel: CPU: All CPU(s) started at EL1
Apr 30 12:33:34.404281 kernel: alternatives: applying system-wide alternatives
Apr 30 12:33:34.404289 kernel: devtmpfs: initialized
Apr 30 12:33:34.404297 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 12:33:34.404304 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 12:33:34.404312 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 12:33:34.404320 kernel: SMBIOS 3.1.0 present.
Apr 30 12:33:34.404328 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Apr 30 12:33:34.404335 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 12:33:34.404343 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 12:33:34.404353 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 12:33:34.404360 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 12:33:34.404368 kernel: audit: initializing netlink subsys (disabled)
Apr 30 12:33:34.404376 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Apr 30 12:33:34.404383 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 12:33:34.404391 kernel: cpuidle: using governor menu
Apr 30 12:33:34.404398 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 12:33:34.404406 kernel: ASID allocator initialised with 32768 entries
Apr 30 12:33:34.404414 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 12:33:34.404423 kernel: Serial: AMBA PL011 UART driver
Apr 30 12:33:34.404431 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 30 12:33:34.404438 kernel: Modules: 0 pages in range for non-PLT usage
Apr 30 12:33:34.404446 kernel: Modules: 509264 pages in range for PLT usage
Apr 30 12:33:34.404453 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 12:33:34.404461 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 12:33:34.404469 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 12:33:34.404476 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 12:33:34.404484 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 12:33:34.404494 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 12:33:34.404501 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 12:33:34.404510 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 12:33:34.404517 kernel: ACPI: Added _OSI(Module Device)
Apr 30 12:33:34.404525 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 12:33:34.404533 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 12:33:34.404541 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 12:33:34.404549 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 12:33:34.404556 kernel: ACPI: Interpreter enabled
Apr 30 12:33:34.404567 kernel: ACPI: Using GIC for interrupt routing
Apr 30 12:33:34.404575 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Apr 30 12:33:34.404582 kernel: printk: console [ttyAMA0] enabled
Apr 30 12:33:34.404590 kernel: printk: bootconsole [pl11] disabled
Apr 30 12:33:34.404598 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Apr 30 12:33:34.404605 kernel: iommu: Default domain type: Translated
Apr 30 12:33:34.404613 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 12:33:34.404621 kernel: efivars: Registered efivars operations
Apr 30 12:33:34.404629 kernel: vgaarb: loaded
Apr 30 12:33:34.404638 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 12:33:34.404645 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 12:33:34.404653 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 12:33:34.404661 kernel: pnp: PnP ACPI init
Apr 30 12:33:34.404668 kernel: pnp: PnP ACPI: found 0 devices
Apr 30 12:33:34.404676 kernel: NET: Registered PF_INET protocol family
Apr 30 12:33:34.404684 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 12:33:34.404691 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 12:33:34.404699 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 12:33:34.404709 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 12:33:34.404716 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 12:33:34.404724 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 12:33:34.404732 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 12:33:34.404740 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 12:33:34.404747 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 12:33:34.404755 kernel: PCI: CLS 0 bytes, default 64
Apr 30 12:33:34.404762 kernel: kvm [1]: HYP mode not available
Apr 30 12:33:34.404770 kernel: Initialise system trusted keyrings
Apr 30 12:33:34.404779 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 12:33:34.404787 kernel: Key type asymmetric registered
Apr 30 12:33:34.404794 kernel: Asymmetric key parser 'x509' registered
Apr 30 12:33:34.404802 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 12:33:34.404810 kernel: io scheduler mq-deadline registered
Apr 30 12:33:34.404817 kernel: io scheduler kyber registered
Apr 30 12:33:34.404825 kernel: io scheduler bfq registered
Apr 30 12:33:34.404833 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 12:33:34.404840 kernel: thunder_xcv, ver 1.0
Apr 30 12:33:34.404849 kernel: thunder_bgx, ver 1.0
Apr 30 12:33:34.404857 kernel: nicpf, ver 1.0
Apr 30 12:33:34.404864 kernel: nicvf, ver 1.0
Apr 30 12:33:34.405062 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 12:33:34.405145 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T12:33:33 UTC (1746016413)
Apr 30 12:33:34.405156 kernel: efifb: probing for efifb
Apr 30 12:33:34.405164 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 30 12:33:34.405173 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 30 12:33:34.405183 kernel: efifb: scrolling: redraw
Apr 30 12:33:34.405190 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 12:33:34.405198 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 12:33:34.405206 kernel: fb0: EFI VGA frame buffer device
Apr 30 12:33:34.405214 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Apr 30 12:33:34.407297 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 12:33:34.407311 kernel: No ACPI PMU IRQ for CPU0
Apr 30 12:33:34.407319 kernel: No ACPI PMU IRQ for CPU1
Apr 30 12:33:34.407328 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Apr 30 12:33:34.407344 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 12:33:34.407352 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 12:33:34.407360 kernel: NET: Registered PF_INET6 protocol family
Apr 30 12:33:34.407367 kernel: Segment Routing with IPv6
Apr 30 12:33:34.407375 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 12:33:34.407383 kernel: NET: Registered PF_PACKET protocol family
Apr 30 12:33:34.407391 kernel: Key type dns_resolver registered
Apr 30 12:33:34.407399 kernel: registered taskstats version 1
Apr 30 12:33:34.407406 kernel: Loading compiled-in X.509 certificates
Apr 30 12:33:34.407416 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4e3d8be893bce81adbd52ab54fa98214a1a14a2e'
Apr 30 12:33:34.407425 kernel: Key type .fscrypt registered
Apr 30 12:33:34.407432 kernel: Key type fscrypt-provisioning registered
Apr 30 12:33:34.407440 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 12:33:34.407448 kernel: ima: Allocated hash algorithm: sha1
Apr 30 12:33:34.407455 kernel: ima: No architecture policies found
Apr 30 12:33:34.407463 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 12:33:34.407471 kernel: clk: Disabling unused clocks
Apr 30 12:33:34.407478 kernel: Freeing unused kernel memory: 38336K
Apr 30 12:33:34.407488 kernel: Run /init as init process
Apr 30 12:33:34.407496 kernel: with arguments:
Apr 30 12:33:34.407504 kernel: /init
Apr 30 12:33:34.407511 kernel: with environment:
Apr 30 12:33:34.407518 kernel: HOME=/
Apr 30 12:33:34.407526 kernel: TERM=linux
Apr 30 12:33:34.407534 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 12:33:34.407543 systemd[1]: Successfully made /usr/ read-only.
Apr 30 12:33:34.407557 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 30 12:33:34.407566 systemd[1]: Detected virtualization microsoft.
Apr 30 12:33:34.407575 systemd[1]: Detected architecture arm64.
Apr 30 12:33:34.407583 systemd[1]: Running in initrd.
Apr 30 12:33:34.407591 systemd[1]: No hostname configured, using default hostname.
Apr 30 12:33:34.407600 systemd[1]: Hostname set to .
Apr 30 12:33:34.407608 systemd[1]: Initializing machine ID from random generator.
Apr 30 12:33:34.407616 systemd[1]: Queued start job for default target initrd.target.
Apr 30 12:33:34.407626 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:33:34.407635 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:33:34.407644 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 12:33:34.407653 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 12:33:34.407661 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 12:33:34.407671 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 12:33:34.407680 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 12:33:34.407691 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 12:33:34.407699 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:33:34.407707 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:33:34.407716 systemd[1]: Reached target paths.target - Path Units.
Apr 30 12:33:34.407724 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 12:33:34.407732 systemd[1]: Reached target swap.target - Swaps.
Apr 30 12:33:34.407741 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 12:33:34.407749 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 12:33:34.407760 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 12:33:34.407768 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 12:33:34.407777 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 30 12:33:34.407785 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:33:34.407794 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:33:34.407802 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:33:34.407810 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 12:33:34.407818 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 12:33:34.407827 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 12:33:34.407837 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 12:33:34.407845 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 12:33:34.407854 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 12:33:34.407900 systemd-journald[218]: Collecting audit messages is disabled.
Apr 30 12:33:34.407924 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 12:33:34.407933 systemd-journald[218]: Journal started
Apr 30 12:33:34.407953 systemd-journald[218]: Runtime Journal (/run/log/journal/dff44b28d1124c4981b68ab858bdf24e) is 8M, max 78.5M, 70.5M free.
Apr 30 12:33:34.420250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:33:34.418016 systemd-modules-load[220]: Inserted module 'overlay'
Apr 30 12:33:34.455744 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 12:33:34.455817 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 12:33:34.466033 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 12:33:34.490794 kernel: Bridge firewalling registered
Apr 30 12:33:34.475356 systemd-modules-load[220]: Inserted module 'br_netfilter'
Apr 30 12:33:34.476468 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:33:34.484716 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 12:33:34.496014 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:33:34.508506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:33:34.546860 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 12:33:34.556458 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 12:33:34.584093 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 12:33:34.611652 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 12:33:34.625982 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:33:34.650257 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:33:34.663868 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 12:33:34.678648 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:33:34.707894 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 12:33:34.725055 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 12:33:34.743470 dracut-cmdline[253]: dracut-dracut-053
Apr 30 12:33:34.750930 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=984055eb0c340c9cf0fb51b368030ed72e75b7f2e065edc13766888ef0b42074
Apr 30 12:33:34.748440 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 12:33:34.801982 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:33:34.839268 systemd-resolved[259]: Positive Trust Anchors:
Apr 30 12:33:34.839286 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 12:33:34.839317 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 12:33:34.841757 systemd-resolved[259]: Defaulting to hostname 'linux'.
Apr 30 12:33:34.847813 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 12:33:34.902081 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 12:33:34.977246 kernel: SCSI subsystem initialized
Apr 30 12:33:34.987236 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 12:33:34.996253 kernel: iscsi: registered transport (tcp)
Apr 30 12:33:35.014813 kernel: iscsi: registered transport (qla4xxx)
Apr 30 12:33:35.014842 kernel: QLogic iSCSI HBA Driver
Apr 30 12:33:35.052800 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 12:33:35.071777 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 12:33:35.104893 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 12:33:35.104935 kernel: device-mapper: uevent: version 1.0.3
Apr 30 12:33:35.112271 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 12:33:35.164283 kernel: raid6: neonx8 gen() 15666 MB/s
Apr 30 12:33:35.184249 kernel: raid6: neonx4 gen() 15817 MB/s
Apr 30 12:33:35.204245 kernel: raid6: neonx2 gen() 13196 MB/s
Apr 30 12:33:35.225258 kernel: raid6: neonx1 gen() 10538 MB/s
Apr 30 12:33:35.245245 kernel: raid6: int64x8 gen() 6789 MB/s
Apr 30 12:33:35.266240 kernel: raid6: int64x4 gen() 7356 MB/s
Apr 30 12:33:35.288251 kernel: raid6: int64x2 gen() 6109 MB/s
Apr 30 12:33:35.314659 kernel: raid6: int64x1 gen() 5058 MB/s
Apr 30 12:33:35.314727 kernel: raid6: using algorithm neonx4 gen() 15817 MB/s
Apr 30 12:33:35.342026 kernel: raid6: .... xor() 12353 MB/s, rmw enabled
Apr 30 12:33:35.342043 kernel: raid6: using neon recovery algorithm
Apr 30 12:33:35.355142 kernel: xor: measuring software checksum speed
Apr 30 12:33:35.355168 kernel: 8regs : 21522 MB/sec
Apr 30 12:33:35.358963 kernel: 32regs : 21601 MB/sec
Apr 30 12:33:35.362720 kernel: arm64_neon : 27691 MB/sec
Apr 30 12:33:35.368741 kernel: xor: using function: arm64_neon (27691 MB/sec)
Apr 30 12:33:35.423287 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 12:33:35.436289 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 12:33:35.454398 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:33:35.481820 systemd-udevd[441]: Using default interface naming scheme 'v255'.
Apr 30 12:33:35.487623 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:33:35.514387 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 12:33:35.533889 dracut-pre-trigger[453]: rd.md=0: removing MD RAID activation
Apr 30 12:33:35.570186 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 12:33:35.587515 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 12:33:35.622698 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:33:35.646865 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 12:33:35.672373 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 12:33:35.692261 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 12:33:35.704698 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:33:35.726720 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 12:33:35.757533 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 12:33:35.785035 kernel: hv_vmbus: Vmbus version:5.3
Apr 30 12:33:35.774436 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 12:33:35.802585 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 12:33:35.824276 kernel: hv_vmbus: registering driver hid_hyperv
Apr 30 12:33:35.824317 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 30 12:33:35.824328 kernel: hv_vmbus: registering driver hv_netvsc
Apr 30 12:33:35.824338 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 30 12:33:35.808104 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:33:35.847619 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Apr 30 12:33:35.847640 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 30 12:33:35.879272 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 30 12:33:35.879522 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Apr 30 12:33:35.883448 kernel: PTP clock support registered
Apr 30 12:33:35.885655 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 12:33:35.935659 kernel: hv_utils: Registering HyperV Utility Driver
Apr 30 12:33:35.935684 kernel: hv_vmbus: registering driver hv_utils
Apr 30 12:33:35.935695 kernel: hv_utils: Heartbeat IC version 3.0
Apr 30 12:33:35.935716 kernel: hv_utils: Shutdown IC version 3.2
Apr 30 12:33:35.935725 kernel: hv_vmbus: registering driver hv_storvsc
Apr 30 12:33:35.935736 kernel: hv_utils: TimeSync IC version 4.0
Apr 30 12:33:35.908303 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 12:33:35.946748 kernel: scsi host0: storvsc_host_t
Apr 30 12:33:35.946798 kernel: scsi host1: storvsc_host_t
Apr 30 12:33:35.908562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:33:35.986550 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 30 12:33:35.986591 kernel: hv_netvsc 000d3afc-2080-000d-3afc-2080000d3afc eth0: VF slot 1 added
Apr 30 12:33:35.943292 systemd-resolved[259]: Clock change detected. Flushing caches.
Apr 30 12:33:36.001110 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Apr 30 12:33:35.980806 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:33:36.007956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:33:36.038088 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 12:33:36.045587 kernel: hv_vmbus: registering driver hv_pci Apr 30 12:33:36.038276 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:33:36.080360 kernel: hv_pci e9364a29-bdbb-46a8-b8de-c4221cd74af5: PCI VMBus probing: Using version 0x10004 Apr 30 12:33:36.186563 kernel: hv_pci e9364a29-bdbb-46a8-b8de-c4221cd74af5: PCI host bridge to bus bdbb:00 Apr 30 12:33:36.186695 kernel: pci_bus bdbb:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Apr 30 12:33:36.186805 kernel: pci_bus bdbb:00: No busn resource found for root bus, will use [bus 00-ff] Apr 30 12:33:36.186885 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 30 12:33:36.186988 kernel: pci bdbb:00:02.0: [15b3:1018] type 00 class 0x020000 Apr 30 12:33:36.187085 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 12:33:36.187096 kernel: pci bdbb:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 30 12:33:36.187186 kernel: pci bdbb:00:02.0: enabling Extended Tags Apr 30 12:33:36.187270 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 30 12:33:36.187357 kernel: pci bdbb:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bdbb:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Apr 30 12:33:36.187460 kernel: pci_bus bdbb:00: busn_res: [bus 00-ff] end is updated to 00 Apr 30 12:33:36.187543 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 30 12:33:36.225526 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 30 12:33:36.225667 kernel: pci bdbb:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 30 12:33:36.225794 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 12:33:36.225906 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 30 12:33:36.226003 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 30 12:33:36.226096 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:33:36.226111 kernel: sd 0:0:0:0: [sda] 
Attached SCSI disk Apr 30 12:33:36.070380 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:33:36.128153 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:33:36.197852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:33:36.232842 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:33:36.289639 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:33:36.320176 kernel: mlx5_core bdbb:00:02.0: enabling device (0000 -> 0002) Apr 30 12:33:36.549958 kernel: mlx5_core bdbb:00:02.0: firmware version: 16.30.1284 Apr 30 12:33:36.550123 kernel: hv_netvsc 000d3afc-2080-000d-3afc-2080000d3afc eth0: VF registering: eth1 Apr 30 12:33:36.550227 kernel: mlx5_core bdbb:00:02.0 eth1: joined to eth0 Apr 30 12:33:36.550325 kernel: mlx5_core bdbb:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Apr 30 12:33:36.557462 kernel: mlx5_core bdbb:00:02.0 enP48571s1: renamed from eth1 Apr 30 12:33:36.808890 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 30 12:33:36.840527 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (498) Apr 30 12:33:36.860471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 30 12:33:36.919917 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 30 12:33:36.937638 kernel: BTRFS: device fsid 8f86a166-b3d6-49f7-a49d-597eaeb9f5e5 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (485) Apr 30 12:33:36.951568 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 30 12:33:36.959952 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. 
Apr 30 12:33:36.994632 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 12:33:37.022472 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:33:37.032451 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:33:38.042534 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:33:38.042599 disk-uuid[609]: The operation has completed successfully. Apr 30 12:33:38.108503 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 12:33:38.108618 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 12:33:38.164584 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 12:33:38.180265 sh[695]: Success Apr 30 12:33:38.211625 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 30 12:33:38.413409 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 12:33:38.437021 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 12:33:38.449463 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 12:33:38.484134 kernel: BTRFS info (device dm-0): first mount of filesystem 8f86a166-b3d6-49f7-a49d-597eaeb9f5e5 Apr 30 12:33:38.484192 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 30 12:33:38.491533 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 12:33:38.496826 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 12:33:38.501388 kernel: BTRFS info (device dm-0): using free space tree Apr 30 12:33:38.819016 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 12:33:38.825640 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 12:33:38.846689 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Apr 30 12:33:38.861666 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 12:33:38.896873 kernel: BTRFS info (device sda6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:33:38.896898 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 12:33:38.896908 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:33:38.920458 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:33:38.933523 kernel: BTRFS info (device sda6): last unmount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:33:38.938966 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 12:33:38.955695 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 12:33:39.013071 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 12:33:39.035590 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 12:33:39.067421 systemd-networkd[876]: lo: Link UP Apr 30 12:33:39.067441 systemd-networkd[876]: lo: Gained carrier Apr 30 12:33:39.070159 systemd-networkd[876]: Enumeration completed Apr 30 12:33:39.070348 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 12:33:39.080579 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:33:39.080583 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:33:39.081387 systemd[1]: Reached target network.target - Network. 
Apr 30 12:33:39.147452 kernel: mlx5_core bdbb:00:02.0 enP48571s1: Link up Apr 30 12:33:39.189452 kernel: hv_netvsc 000d3afc-2080-000d-3afc-2080000d3afc eth0: Data path switched to VF: enP48571s1 Apr 30 12:33:39.190659 systemd-networkd[876]: enP48571s1: Link UP Apr 30 12:33:39.190873 systemd-networkd[876]: eth0: Link UP Apr 30 12:33:39.191243 systemd-networkd[876]: eth0: Gained carrier Apr 30 12:33:39.191253 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:33:39.203822 systemd-networkd[876]: enP48571s1: Gained carrier Apr 30 12:33:39.224500 systemd-networkd[876]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Apr 30 12:33:39.735168 ignition[798]: Ignition 2.20.0 Apr 30 12:33:39.738674 ignition[798]: Stage: fetch-offline Apr 30 12:33:39.738732 ignition[798]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:33:39.743202 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 12:33:39.738741 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 12:33:39.738866 ignition[798]: parsed url from cmdline: "" Apr 30 12:33:39.738869 ignition[798]: no config URL provided Apr 30 12:33:39.738874 ignition[798]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 12:33:39.738882 ignition[798]: no config at "/usr/lib/ignition/user.ign" Apr 30 12:33:39.774802 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 30 12:33:39.738888 ignition[798]: failed to fetch config: resource requires networking Apr 30 12:33:39.739097 ignition[798]: Ignition finished successfully Apr 30 12:33:39.798752 ignition[886]: Ignition 2.20.0 Apr 30 12:33:39.798773 ignition[886]: Stage: fetch Apr 30 12:33:39.799569 ignition[886]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:33:39.799582 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 12:33:39.800707 ignition[886]: parsed url from cmdline: "" Apr 30 12:33:39.800727 ignition[886]: no config URL provided Apr 30 12:33:39.800748 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 12:33:39.800827 ignition[886]: no config at "/usr/lib/ignition/user.ign" Apr 30 12:33:39.803476 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 30 12:33:39.901242 ignition[886]: GET result: OK Apr 30 12:33:39.901364 ignition[886]: config has been read from IMDS userdata Apr 30 12:33:39.901405 ignition[886]: parsing config with SHA512: 5e424295525f675811415dbc0af6fed25282d1cb7a5d129de141f75f57886da5a054e5b17da0e13e823d9d2b506db191c4f8765985432d9697b0e28ef0984c06 Apr 30 12:33:39.906846 unknown[886]: fetched base config from "system" Apr 30 12:33:39.907348 ignition[886]: fetch: fetch complete Apr 30 12:33:39.906856 unknown[886]: fetched base config from "system" Apr 30 12:33:39.907353 ignition[886]: fetch: fetch passed Apr 30 12:33:39.906861 unknown[886]: fetched user config from "azure" Apr 30 12:33:39.907402 ignition[886]: Ignition finished successfully Apr 30 12:33:39.910015 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 12:33:39.936701 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 12:33:39.966483 ignition[892]: Ignition 2.20.0 Apr 30 12:33:39.966498 ignition[892]: Stage: kargs Apr 30 12:33:39.972792 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Apr 30 12:33:39.966715 ignition[892]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:33:39.966727 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 12:33:39.967930 ignition[892]: kargs: kargs passed Apr 30 12:33:39.967994 ignition[892]: Ignition finished successfully Apr 30 12:33:39.998773 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 12:33:40.025294 ignition[898]: Ignition 2.20.0 Apr 30 12:33:40.025307 ignition[898]: Stage: disks Apr 30 12:33:40.032577 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 12:33:40.025510 ignition[898]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:33:40.039751 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 12:33:40.025521 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 12:33:40.052183 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 12:33:40.026537 ignition[898]: disks: disks passed Apr 30 12:33:40.066600 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 12:33:40.026589 ignition[898]: Ignition finished successfully Apr 30 12:33:40.078969 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:33:40.091483 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:33:40.125725 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 12:33:40.225161 systemd-fsck[906]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 30 12:33:40.234489 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 12:33:40.259647 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 12:33:40.310447 kernel: EXT4-fs (sda9): mounted filesystem 597557b0-8ae6-4a5a-8e98-f3f884fcfe65 r/w with ordered data mode. Quota mode: none. Apr 30 12:33:40.311518 systemd[1]: Mounted sysroot.mount - /sysroot. 
Apr 30 12:33:40.322770 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 12:33:40.362675 systemd-networkd[876]: enP48571s1: Gained IPv6LL Apr 30 12:33:40.376613 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:33:40.388409 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 12:33:40.398691 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 12:33:40.416214 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 12:33:40.416267 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 12:33:40.465944 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (917) Apr 30 12:33:40.465973 kernel: BTRFS info (device sda6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:33:40.451749 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 12:33:40.478544 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 12:33:40.490760 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:33:40.492750 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 12:33:40.510108 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:33:40.508657 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 12:33:40.554649 systemd-networkd[876]: eth0: Gained IPv6LL Apr 30 12:33:41.007247 coreos-metadata[919]: Apr 30 12:33:41.007 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 30 12:33:41.131064 coreos-metadata[919]: Apr 30 12:33:41.130 INFO Fetch successful Apr 30 12:33:41.131064 coreos-metadata[919]: Apr 30 12:33:41.130 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 30 12:33:41.151141 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 12:33:41.187701 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory Apr 30 12:33:41.197970 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 12:33:41.223039 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 12:33:41.272077 coreos-metadata[919]: Apr 30 12:33:41.271 INFO Fetch successful Apr 30 12:33:41.280811 coreos-metadata[919]: Apr 30 12:33:41.278 INFO wrote hostname ci-4230.1.1-a-9a970e7770 to /sysroot/etc/hostname Apr 30 12:33:41.281618 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 12:33:42.122267 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 12:33:42.140684 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 12:33:42.149261 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 12:33:42.170874 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 12:33:42.184448 kernel: BTRFS info (device sda6): last unmount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:33:42.206478 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 30 12:33:42.217545 ignition[1042]: INFO : Ignition 2.20.0 Apr 30 12:33:42.217545 ignition[1042]: INFO : Stage: mount Apr 30 12:33:42.217545 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:33:42.217545 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 12:33:42.217545 ignition[1042]: INFO : mount: mount passed Apr 30 12:33:42.217545 ignition[1042]: INFO : Ignition finished successfully Apr 30 12:33:42.223821 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 12:33:42.248676 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 12:33:42.282664 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:33:42.313001 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1053) Apr 30 12:33:42.313068 kernel: BTRFS info (device sda6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:33:42.320160 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 12:33:42.324990 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:33:42.332444 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:33:42.333895 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 12:33:42.364421 ignition[1071]: INFO : Ignition 2.20.0 Apr 30 12:33:42.371406 ignition[1071]: INFO : Stage: files Apr 30 12:33:42.371406 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:33:42.371406 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 12:33:42.371406 ignition[1071]: DEBUG : files: compiled without relabeling support, skipping Apr 30 12:33:42.404208 ignition[1071]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 12:33:42.404208 ignition[1071]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 12:33:42.492293 ignition[1071]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 12:33:42.500713 ignition[1071]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 12:33:42.500713 ignition[1071]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 12:33:42.492834 unknown[1071]: wrote ssh authorized keys file for user: core Apr 30 12:33:42.524977 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 12:33:42.524977 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 30 12:33:42.787319 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 12:33:43.024737 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 12:33:43.024737 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 12:33:43.056638 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Apr 30 12:33:43.472816 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 30 12:33:43.546623 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 12:33:43.546623 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 30 12:33:43.572724 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 12:33:43.572724 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 12:33:43.572724 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 12:33:43.572724 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 12:33:43.572724 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 12:33:43.572724 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 12:33:43.572724 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 12:33:43.572724 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 12:33:43.572724 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 12:33:43.572724 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 12:33:43.572724 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 12:33:43.572724 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 12:33:43.572724 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Apr 30 12:33:43.944529 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 30 12:33:44.364387 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 12:33:44.364387 ignition[1071]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 30 12:33:44.393405 ignition[1071]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 12:33:44.405781 ignition[1071]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 12:33:44.405781 ignition[1071]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 30 12:33:44.405781 ignition[1071]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Apr 30 12:33:44.405781 ignition[1071]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 12:33:44.405781 ignition[1071]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" 
Apr 30 12:33:44.405781 ignition[1071]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 12:33:44.405781 ignition[1071]: INFO : files: files passed Apr 30 12:33:44.405781 ignition[1071]: INFO : Ignition finished successfully Apr 30 12:33:44.423973 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 12:33:44.457447 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 12:33:44.474726 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 12:33:44.496003 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 12:33:44.539931 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:33:44.539931 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:33:44.496457 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 12:33:44.572903 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:33:44.550600 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 12:33:44.566768 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 12:33:44.597665 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 12:33:44.640381 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 12:33:44.640538 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 12:33:44.657099 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 12:33:44.673087 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Apr 30 12:33:44.688124 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 12:33:44.707747 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 12:33:44.734294 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 12:33:44.755743 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 12:33:44.784513 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 12:33:44.784631 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 12:33:44.798196 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:33:44.811870 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:33:44.825437 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 12:33:44.837711 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 12:33:44.837804 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 12:33:44.855541 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 12:33:44.868463 systemd[1]: Stopped target basic.target - Basic System. Apr 30 12:33:44.879474 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 12:33:44.890971 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 12:33:44.903811 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 12:33:44.917233 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 12:33:44.929552 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 12:33:44.942588 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 12:33:44.955564 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Apr 30 12:33:44.967622 systemd[1]: Stopped target swap.target - Swaps. Apr 30 12:33:44.978174 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 12:33:44.978258 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 12:33:44.994340 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:33:45.001068 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:33:45.015887 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 12:33:45.015934 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:33:45.031015 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 12:33:45.031102 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 12:33:45.050921 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 12:33:45.050981 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 12:33:45.068780 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 12:33:45.068848 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 12:33:45.082134 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 12:33:45.144982 ignition[1124]: INFO : Ignition 2.20.0 Apr 30 12:33:45.144982 ignition[1124]: INFO : Stage: umount Apr 30 12:33:45.144982 ignition[1124]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:33:45.144982 ignition[1124]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 12:33:45.144982 ignition[1124]: INFO : umount: umount passed Apr 30 12:33:45.144982 ignition[1124]: INFO : Ignition finished successfully Apr 30 12:33:45.082196 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 12:33:45.111640 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Apr 30 12:33:45.129469 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 12:33:45.129561 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:33:45.144656 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 12:33:45.152777 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 12:33:45.152861 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:33:45.163896 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 12:33:45.163948 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 12:33:45.195756 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 12:33:45.195921 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 12:33:45.210890 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 12:33:45.211049 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 12:33:45.233197 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 12:33:45.233320 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 12:33:45.256196 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 12:33:45.256311 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 12:33:45.272458 systemd[1]: Stopped target network.target - Network. Apr 30 12:33:45.289708 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 12:33:45.289829 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 12:33:45.298543 systemd[1]: Stopped target paths.target - Path Units. Apr 30 12:33:45.313984 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 12:33:45.317467 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 12:33:45.334155 systemd[1]: Stopped target slices.target - Slice Units. 
Apr 30 12:33:45.345874 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 12:33:45.361204 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 12:33:45.361287 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 12:33:45.375727 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 12:33:45.375789 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 12:33:45.394417 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 12:33:45.394522 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 12:33:45.406898 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 12:33:45.406972 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 12:33:45.421034 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 12:33:45.433961 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 12:33:45.455780 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 12:33:45.456485 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 12:33:45.456621 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 12:33:45.688258 kernel: hv_netvsc 000d3afc-2080-000d-3afc-2080000d3afc eth0: Data path switched from VF: enP48571s1
Apr 30 12:33:45.477497 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 30 12:33:45.477906 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 12:33:45.478163 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 12:33:45.490867 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 30 12:33:45.491886 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 12:33:45.491957 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:33:45.510703 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 12:33:45.526548 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 12:33:45.526635 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 12:33:45.548544 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 12:33:45.548615 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:33:45.566840 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 12:33:45.566908 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:33:45.575091 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 12:33:45.575168 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:33:45.593056 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:33:45.604151 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 30 12:33:45.604238 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 30 12:33:45.650331 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 12:33:45.650527 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:33:45.662589 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 12:33:45.662645 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:33:45.681981 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 12:33:45.682022 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:33:45.695119 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 12:33:45.695193 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 12:33:45.714536 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 12:33:45.714604 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 12:33:45.736838 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 12:33:45.736952 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:33:45.772701 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 12:33:45.791734 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 12:33:45.791832 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:33:45.813570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 12:33:45.813693 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:33:45.832204 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 30 12:33:45.832292 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 30 12:33:45.833127 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 12:33:46.049597 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Apr 30 12:33:45.833240 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 12:33:45.842183 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 12:33:45.842288 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 12:33:45.853950 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 12:33:45.854043 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 12:33:45.868674 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 12:33:45.881795 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 12:33:45.881908 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 12:33:45.917780 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 12:33:45.954961 systemd[1]: Switching root.
Apr 30 12:33:46.106692 systemd-journald[218]: Journal stopped
Apr 30 12:33:50.082898 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 12:33:50.082930 kernel: SELinux: policy capability open_perms=1
Apr 30 12:33:50.082941 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 12:33:50.082949 kernel: SELinux: policy capability always_check_network=0
Apr 30 12:33:50.082959 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 12:33:50.082967 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 12:33:50.082975 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 12:33:50.082983 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 12:33:50.082991 kernel: audit: type=1403 audit(1746016426.995:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 12:33:50.083001 systemd[1]: Successfully loaded SELinux policy in 140.838ms.
Apr 30 12:33:50.083013 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.200ms.
Apr 30 12:33:50.083023 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 30 12:33:50.083032 systemd[1]: Detected virtualization microsoft.
Apr 30 12:33:50.083040 systemd[1]: Detected architecture arm64.
Apr 30 12:33:50.083049 systemd[1]: Detected first boot.
Apr 30 12:33:50.083059 systemd[1]: Hostname set to .
Apr 30 12:33:50.083067 systemd[1]: Initializing machine ID from random generator.
Apr 30 12:33:50.083076 zram_generator::config[1167]: No configuration found.
Apr 30 12:33:50.083085 kernel: NET: Registered PF_VSOCK protocol family
Apr 30 12:33:50.083093 systemd[1]: Populated /etc with preset unit settings.
Apr 30 12:33:50.083102 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 30 12:33:50.083111 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 12:33:50.083121 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 12:33:50.083130 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 12:33:50.083138 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 12:33:50.083148 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 12:33:50.083156 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 12:33:50.083165 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 12:33:50.083176 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 12:33:50.083187 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 12:33:50.083196 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 12:33:50.083205 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 12:33:50.083214 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:33:50.083223 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:33:50.083232 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 12:33:50.083240 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 12:33:50.083249 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 12:33:50.083260 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 12:33:50.083269 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 30 12:33:50.083278 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:33:50.083289 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 12:33:50.083298 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 12:33:50.083308 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 12:33:50.083317 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 12:33:50.083326 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:33:50.083337 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 12:33:50.083346 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 12:33:50.083355 systemd[1]: Reached target swap.target - Swaps.
Apr 30 12:33:50.083364 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 12:33:50.083375 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 12:33:50.083384 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 30 12:33:50.083395 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:33:50.083405 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:33:50.083414 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:33:50.083423 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 12:33:50.083485 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 12:33:50.083495 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 12:33:50.083504 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 12:33:50.083516 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 12:33:50.083526 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 12:33:50.083534 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 12:33:50.083544 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 12:33:50.083553 systemd[1]: Reached target machines.target - Containers.
Apr 30 12:33:50.083563 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 12:33:50.083572 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:33:50.083581 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 12:33:50.083592 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 12:33:50.083601 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:33:50.083610 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 12:33:50.083619 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 12:33:50.083628 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 12:33:50.083637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 12:33:50.083647 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 12:33:50.083656 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 12:33:50.083667 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 12:33:50.083676 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 12:33:50.083685 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 12:33:50.083695 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:33:50.083704 kernel: fuse: init (API version 7.39)
Apr 30 12:33:50.083712 kernel: loop: module loaded
Apr 30 12:33:50.083721 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 12:33:50.083729 kernel: ACPI: bus type drm_connector registered
Apr 30 12:33:50.083738 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 12:33:50.083749 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 12:33:50.083784 systemd-journald[1271]: Collecting audit messages is disabled.
Apr 30 12:33:50.083805 systemd-journald[1271]: Journal started
Apr 30 12:33:50.083828 systemd-journald[1271]: Runtime Journal (/run/log/journal/59f7440806b34ace80c81dabe8cd1e14) is 8M, max 78.5M, 70.5M free.
Apr 30 12:33:49.062499 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 12:33:49.067346 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 12:33:49.067765 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 12:33:49.068109 systemd[1]: systemd-journald.service: Consumed 3.722s CPU time.
Apr 30 12:33:50.100129 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 12:33:50.117224 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 30 12:33:50.131946 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 12:33:50.142449 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 12:33:50.142514 systemd[1]: Stopped verity-setup.service.
Apr 30 12:33:50.160815 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 12:33:50.161778 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 12:33:50.168032 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 12:33:50.175317 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 12:33:50.182098 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 12:33:50.189338 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 12:33:50.196361 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 12:33:50.202687 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 12:33:50.211052 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:33:50.219152 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 12:33:50.219322 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 12:33:50.226802 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:33:50.226983 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:33:50.234576 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 12:33:50.234803 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 12:33:50.242465 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 12:33:50.242687 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 12:33:50.250827 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 12:33:50.251034 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 12:33:50.258374 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 12:33:50.258555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 12:33:50.265316 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:33:50.273334 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 12:33:50.282901 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 12:33:50.291806 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 30 12:33:50.300508 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:33:50.321191 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 12:33:50.340654 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 12:33:50.349176 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 12:33:50.356157 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 12:33:50.356204 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 12:33:50.363232 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 30 12:33:50.371921 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 12:33:50.380488 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 12:33:50.386818 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 12:33:50.426679 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 12:33:50.436050 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 12:33:50.443260 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 12:33:50.444593 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 12:33:50.450947 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 12:33:50.452344 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 12:33:50.462676 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 12:33:50.482998 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 12:33:50.491814 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 12:33:50.498349 systemd-journald[1271]: Time spent on flushing to /var/log/journal/59f7440806b34ace80c81dabe8cd1e14 is 25.981ms for 915 entries.
Apr 30 12:33:50.498349 systemd-journald[1271]: System Journal (/var/log/journal/59f7440806b34ace80c81dabe8cd1e14) is 8M, max 2.6G, 2.6G free.
Apr 30 12:33:50.554619 systemd-journald[1271]: Received client request to flush runtime journal.
Apr 30 12:33:50.554672 kernel: loop0: detected capacity change from 0 to 123192
Apr 30 12:33:50.509030 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 12:33:50.516846 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 12:33:50.524668 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 12:33:50.532797 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 12:33:50.546643 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 12:33:50.562236 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 30 12:33:50.570560 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 12:33:50.579490 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:33:50.588174 udevadm[1310]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 12:33:50.613204 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 12:33:50.631640 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 12:33:50.661977 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 12:33:50.663413 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 30 12:33:50.723529 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Apr 30 12:33:50.723983 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Apr 30 12:33:50.728801 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:33:50.880454 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 12:33:50.911465 kernel: loop1: detected capacity change from 0 to 194096
Apr 30 12:33:50.972460 kernel: loop2: detected capacity change from 0 to 113512
Apr 30 12:33:51.284470 kernel: loop3: detected capacity change from 0 to 28720
Apr 30 12:33:51.638641 kernel: loop4: detected capacity change from 0 to 123192
Apr 30 12:33:51.648490 kernel: loop5: detected capacity change from 0 to 194096
Apr 30 12:33:51.664459 kernel: loop6: detected capacity change from 0 to 113512
Apr 30 12:33:51.674449 kernel: loop7: detected capacity change from 0 to 28720
Apr 30 12:33:51.678122 (sd-merge)[1331]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Apr 30 12:33:51.678620 (sd-merge)[1331]: Merged extensions into '/usr'.
Apr 30 12:33:51.683256 systemd[1]: Reload requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 12:33:51.683279 systemd[1]: Reloading...
Apr 30 12:33:51.766473 zram_generator::config[1358]: No configuration found.
Apr 30 12:33:51.917766 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:33:51.992016 systemd[1]: Reloading finished in 308 ms.
Apr 30 12:33:52.016327 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 12:33:52.026459 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 12:33:52.044869 systemd[1]: Starting ensure-sysext.service...
Apr 30 12:33:52.052644 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 12:33:52.062807 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:33:52.097153 systemd[1]: Reload requested from client PID 1415 ('systemctl') (unit ensure-sysext.service)...
Apr 30 12:33:52.097174 systemd[1]: Reloading...
Apr 30 12:33:52.097775 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 12:33:52.098267 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 12:33:52.099005 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 12:33:52.099212 systemd-tmpfiles[1416]: ACLs are not supported, ignoring.
Apr 30 12:33:52.099256 systemd-tmpfiles[1416]: ACLs are not supported, ignoring.
Apr 30 12:33:52.102967 systemd-tmpfiles[1416]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 12:33:52.103141 systemd-tmpfiles[1416]: Skipping /boot
Apr 30 12:33:52.114558 systemd-tmpfiles[1416]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 12:33:52.114773 systemd-tmpfiles[1416]: Skipping /boot
Apr 30 12:33:52.116366 systemd-udevd[1417]: Using default interface naming scheme 'v255'.
Apr 30 12:33:52.197418 zram_generator::config[1453]: No configuration found.
Apr 30 12:33:52.308211 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:33:52.380849 systemd[1]: Reloading finished in 283 ms.
Apr 30 12:33:52.394418 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:33:52.429651 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:33:52.461854 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 12:33:52.514912 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 12:33:52.525845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:33:52.529152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:33:52.545218 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 12:33:52.564311 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 12:33:52.580271 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 12:33:52.580492 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:33:52.586032 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 12:33:52.603701 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 12:33:52.612959 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 12:33:52.619850 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 12:33:52.628992 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 12:33:52.641037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:33:52.641287 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:33:52.652598 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 12:33:52.654888 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 12:33:52.665675 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 12:33:52.665931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 12:33:52.684068 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 30 12:33:52.685637 systemd[1]: Finished ensure-sysext.service.
Apr 30 12:33:52.695712 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 12:33:52.707880 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Apr 30 12:33:52.717422 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:33:52.718872 augenrules[1569]: No rules
Apr 30 12:33:52.726496 kernel: hv_vmbus: registering driver hv_balloon
Apr 30 12:33:52.726896 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:33:52.746678 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 30 12:33:52.746775 kernel: hv_balloon: Memory hot add disabled on ARM64
Apr 30 12:33:52.748579 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 12:33:52.763682 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 12:33:52.784763 kernel: hv_vmbus: registering driver hyperv_fb
Apr 30 12:33:52.784848 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 30 12:33:52.784866 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 30 12:33:52.798639 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 12:33:52.806890 kernel: Console: switching to colour dummy device 80x25
Apr 30 12:33:52.815896 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 12:33:52.830038 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 12:33:52.830303 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:33:52.830376 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 12:33:52.853647 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 12:33:52.862612 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 12:33:52.862868 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 12:33:52.869217 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 12:33:52.878153 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:33:52.879484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:33:52.888152 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 12:33:52.888336 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 12:33:52.896951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 12:33:52.897164 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 12:33:52.905076 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 12:33:52.905261 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 12:33:52.913379 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Apr 30 12:33:52.929249 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 12:33:52.941676 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 12:33:52.941814 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 12:33:52.952186 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:33:53.049531 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1524)
Apr 30 12:33:53.109627 systemd-resolved[1558]: Positive Trust Anchors:
Apr 30 12:33:53.109685 systemd-resolved[1558]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 12:33:53.109722 systemd-resolved[1558]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 12:33:53.129176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 30 12:33:53.141579 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 12:33:53.151845 systemd-resolved[1558]: Using system hostname 'ci-4230.1.1-a-9a970e7770'.
Apr 30 12:33:53.154326 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 12:33:53.160960 systemd-networkd[1556]: lo: Link UP
Apr 30 12:33:53.161481 systemd-networkd[1556]: lo: Gained carrier
Apr 30 12:33:53.164416 systemd-networkd[1556]: Enumeration completed
Apr 30 12:33:53.165148 systemd-networkd[1556]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:33:53.165264 systemd-networkd[1556]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 12:33:53.165858 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 12:33:53.174645 systemd[1]: Reached target network.target - Network.
Apr 30 12:33:53.181081 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 12:33:53.196676 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 12:33:53.205680 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 12:33:53.215695 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 30 12:33:53.230709 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 12:33:53.272459 kernel: mlx5_core bdbb:00:02.0 enP48571s1: Link up
Apr 30 12:33:53.279218 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 12:33:53.302715 kernel: hv_netvsc 000d3afc-2080-000d-3afc-2080000d3afc eth0: Data path switched to VF: enP48571s1
Apr 30 12:33:53.304214 systemd-networkd[1556]: enP48571s1: Link UP
Apr 30 12:33:53.304324 systemd-networkd[1556]: eth0: Link UP
Apr 30 12:33:53.304327 systemd-networkd[1556]: eth0: Gained carrier
Apr 30 12:33:53.304344 systemd-networkd[1556]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:33:53.306173 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 30 12:33:53.315135 lvm[1663]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 12:33:53.318194 systemd-networkd[1556]: enP48571s1: Gained carrier
Apr 30 12:33:53.325511 systemd-networkd[1556]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
Apr 30 12:33:53.328134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:33:53.336207 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 12:33:53.344868 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:33:53.359638 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 12:33:53.368128 lvm[1674]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 12:33:53.390599 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 12:33:53.509571 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 12:33:53.517499 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 12:33:54.378634 systemd-networkd[1556]: eth0: Gained IPv6LL
Apr 30 12:33:54.381374 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 12:33:54.389991 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 12:33:54.634605 systemd-networkd[1556]: enP48571s1: Gained IPv6LL
Apr 30 12:33:56.123039 ldconfig[1302]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 12:33:56.135123 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 12:33:56.147678 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 12:33:56.156755 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 12:33:56.163546 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 12:33:56.170320 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 12:33:56.177450 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 12:33:56.185122 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 12:33:56.191351 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 12:33:56.198747 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 12:33:56.206127 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 12:33:56.206167 systemd[1]: Reached target paths.target - Path Units.
Apr 30 12:33:56.211625 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 12:33:56.235900 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 12:33:56.244231 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 12:33:56.252472 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 30 12:33:56.260124 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 30 12:33:56.267624 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 30 12:33:56.281361 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 12:33:56.287967 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 30 12:33:56.296406 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 12:33:56.304115 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 12:33:56.310302 systemd[1]: Reached target basic.target - Basic System.
Apr 30 12:33:56.315768 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 12:33:56.315795 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 12:33:56.324568 systemd[1]: Starting chronyd.service - NTP client/server...
Apr 30 12:33:56.333677 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 12:33:56.346833 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 12:33:56.355754 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 12:33:56.367621 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 12:33:56.375900 (chronyd)[1683]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Apr 30 12:33:56.377694 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 12:33:56.385752 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 12:33:56.385797 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Apr 30 12:33:56.387854 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Apr 30 12:33:56.396761 KVP[1692]: KVP starting; pid is:1692
Apr 30 12:33:56.397100 jq[1690]: false
Apr 30 12:33:56.403101 KVP[1692]: KVP LIC Version: 3.1
Apr 30 12:33:56.403463 kernel: hv_utils: KVP IC version 4.0
Apr 30 12:33:56.405088 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Apr 30 12:33:56.407683 chronyd[1694]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Apr 30 12:33:56.413706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:33:56.425341 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 12:33:56.436528 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 12:33:56.443166 chronyd[1694]: Timezone right/UTC failed leap second check, ignoring
Apr 30 12:33:56.443375 chronyd[1694]: Loaded seccomp filter (level 2)
Apr 30 12:33:56.445718 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 12:33:56.454720 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 12:33:56.474134 extend-filesystems[1691]: Found loop4
Apr 30 12:33:56.488735 extend-filesystems[1691]: Found loop5
Apr 30 12:33:56.488735 extend-filesystems[1691]: Found loop6
Apr 30 12:33:56.488735 extend-filesystems[1691]: Found loop7
Apr 30 12:33:56.488735 extend-filesystems[1691]: Found sda
Apr 30 12:33:56.488735 extend-filesystems[1691]: Found sda1
Apr 30 12:33:56.488735 extend-filesystems[1691]: Found sda2
Apr 30 12:33:56.488735 extend-filesystems[1691]: Found sda3
Apr 30 12:33:56.488735 extend-filesystems[1691]: Found usr
Apr 30 12:33:56.488735 extend-filesystems[1691]: Found sda4
Apr 30 12:33:56.488735 extend-filesystems[1691]: Found sda6
Apr 30 12:33:56.488735 extend-filesystems[1691]: Found sda7
Apr 30 12:33:56.488735 extend-filesystems[1691]: Found sda9
Apr 30 12:33:56.488735 extend-filesystems[1691]: Checking size of /dev/sda9
Apr 30 12:33:56.474658 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 12:33:56.648726 extend-filesystems[1691]: Old size kept for /dev/sda9
Apr 30 12:33:56.648726 extend-filesystems[1691]: Found sr0
Apr 30 12:33:56.552241 dbus-daemon[1686]: [system] SELinux support is enabled
Apr 30 12:33:56.499673 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 12:33:56.510882 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 12:33:56.511500 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 12:33:56.517741 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 12:33:56.685058 update_engine[1714]: I20250430 12:33:56.598981 1714 main.cc:92] Flatcar Update Engine starting
Apr 30 12:33:56.685058 update_engine[1714]: I20250430 12:33:56.605445 1714 update_check_scheduler.cc:74] Next update check in 11m36s
Apr 30 12:33:56.538724 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 12:33:56.685388 jq[1716]: true
Apr 30 12:33:56.567809 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 12:33:56.577126 systemd[1]: Started chronyd.service - NTP client/server.
Apr 30 12:33:56.595131 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 12:33:56.595327 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 12:33:56.595613 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 12:33:56.595771 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 12:33:56.633871 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 12:33:56.634061 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 12:33:56.654184 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 12:33:56.672983 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 12:33:56.673239 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 12:33:56.689139 systemd-logind[1709]: New seat seat0.
Apr 30 12:33:56.691226 systemd-logind[1709]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Apr 30 12:33:56.694809 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 12:33:56.708094 coreos-metadata[1685]: Apr 30 12:33:56.707 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 30 12:33:56.726662 jq[1735]: true
Apr 30 12:33:56.738636 coreos-metadata[1685]: Apr 30 12:33:56.717 INFO Fetch successful
Apr 30 12:33:56.738636 coreos-metadata[1685]: Apr 30 12:33:56.718 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Apr 30 12:33:56.738636 coreos-metadata[1685]: Apr 30 12:33:56.727 INFO Fetch successful
Apr 30 12:33:56.738636 coreos-metadata[1685]: Apr 30 12:33:56.727 INFO Fetching http://168.63.129.16/machine/2a252d39-c5a6-446f-aaff-7a581c33856e/d4089e0f%2D23a6%2D4397%2D80c9%2D3fffdb31c9cb.%5Fci%2D4230.1.1%2Da%2D9a970e7770?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Apr 30 12:33:56.738636 coreos-metadata[1685]: Apr 30 12:33:56.730 INFO Fetch successful
Apr 30 12:33:56.738636 coreos-metadata[1685]: Apr 30 12:33:56.730 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Apr 30 12:33:56.727324 (ntainerd)[1740]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 12:33:56.740247 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 12:33:56.741606 dbus-daemon[1686]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 30 12:33:56.740288 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 12:33:56.751036 coreos-metadata[1685]: Apr 30 12:33:56.750 INFO Fetch successful
Apr 30 12:33:56.752361 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 12:33:56.752389 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 12:33:56.793226 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 12:33:56.800626 tar[1731]: linux-arm64/helm
Apr 30 12:33:56.808834 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 12:33:56.848797 bash[1775]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 12:33:56.851488 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 12:33:56.868376 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 12:33:56.887388 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 12:33:56.887738 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 30 12:33:57.012495 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1755)
Apr 30 12:33:57.237363 locksmithd[1779]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 12:33:57.338071 sshd_keygen[1713]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 12:33:57.406565 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 12:33:57.424481 containerd[1740]: time="2025-04-30T12:33:57.424341020Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 30 12:33:57.432401 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 12:33:57.447709 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Apr 30 12:33:57.458466 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 12:33:57.458738 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 12:33:57.472312 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 12:33:57.499837 containerd[1740]: time="2025-04-30T12:33:57.499789820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:33:57.502469 containerd[1740]: time="2025-04-30T12:33:57.501814260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:33:57.502469 containerd[1740]: time="2025-04-30T12:33:57.501857540Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 12:33:57.502469 containerd[1740]: time="2025-04-30T12:33:57.501876940Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 12:33:57.502469 containerd[1740]: time="2025-04-30T12:33:57.502040100Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 12:33:57.502469 containerd[1740]: time="2025-04-30T12:33:57.502069380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 12:33:57.502469 containerd[1740]: time="2025-04-30T12:33:57.502136220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:33:57.502469 containerd[1740]: time="2025-04-30T12:33:57.502149780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:33:57.502469 containerd[1740]: time="2025-04-30T12:33:57.502346860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:33:57.502469 containerd[1740]: time="2025-04-30T12:33:57.502361060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 12:33:57.502469 containerd[1740]: time="2025-04-30T12:33:57.502375460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:33:57.502469 containerd[1740]: time="2025-04-30T12:33:57.502384460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 12:33:57.502937 containerd[1740]: time="2025-04-30T12:33:57.502914700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:33:57.503272 containerd[1740]: time="2025-04-30T12:33:57.503250780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:33:57.503858 containerd[1740]: time="2025-04-30T12:33:57.503590100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:33:57.503858 containerd[1740]: time="2025-04-30T12:33:57.503610620Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 12:33:57.503858 containerd[1740]: time="2025-04-30T12:33:57.503701100Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 12:33:57.503858 containerd[1740]: time="2025-04-30T12:33:57.503745580Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 12:33:57.517645 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Apr 30 12:33:57.524703 containerd[1740]: time="2025-04-30T12:33:57.523977820Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 12:33:57.524703 containerd[1740]: time="2025-04-30T12:33:57.524057700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 12:33:57.524703 containerd[1740]: time="2025-04-30T12:33:57.524074620Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 12:33:57.524703 containerd[1740]: time="2025-04-30T12:33:57.524091540Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 12:33:57.524703 containerd[1740]: time="2025-04-30T12:33:57.524108100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 12:33:57.524703 containerd[1740]: time="2025-04-30T12:33:57.524283180Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 12:33:57.524703 containerd[1740]: time="2025-04-30T12:33:57.524551820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 12:33:57.524703 containerd[1740]: time="2025-04-30T12:33:57.524650860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 12:33:57.524703 containerd[1740]: time="2025-04-30T12:33:57.524672260Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 12:33:57.524703 containerd[1740]: time="2025-04-30T12:33:57.524686580Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 12:33:57.524703 containerd[1740]: time="2025-04-30T12:33:57.524699620Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 12:33:57.524703 containerd[1740]: time="2025-04-30T12:33:57.524714940Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 12:33:57.525013 containerd[1740]: time="2025-04-30T12:33:57.524727660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 12:33:57.525013 containerd[1740]: time="2025-04-30T12:33:57.524742540Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 12:33:57.525013 containerd[1740]: time="2025-04-30T12:33:57.524758660Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 12:33:57.525013 containerd[1740]: time="2025-04-30T12:33:57.524771700Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 12:33:57.525013 containerd[1740]: time="2025-04-30T12:33:57.524784100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 12:33:57.525013 containerd[1740]: time="2025-04-30T12:33:57.524823020Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 12:33:57.525013 containerd[1740]: time="2025-04-30T12:33:57.524845860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525013 containerd[1740]: time="2025-04-30T12:33:57.524861700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525013 containerd[1740]: time="2025-04-30T12:33:57.524881660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525013 containerd[1740]: time="2025-04-30T12:33:57.524895500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525013 containerd[1740]: time="2025-04-30T12:33:57.524907540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525013 containerd[1740]: time="2025-04-30T12:33:57.524920300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525013 containerd[1740]: time="2025-04-30T12:33:57.524931580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525013 containerd[1740]: time="2025-04-30T12:33:57.524945580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525239 containerd[1740]: time="2025-04-30T12:33:57.524958980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525239 containerd[1740]: time="2025-04-30T12:33:57.524973780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525239 containerd[1740]: time="2025-04-30T12:33:57.524985820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525239 containerd[1740]: time="2025-04-30T12:33:57.524998860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525239 containerd[1740]: time="2025-04-30T12:33:57.525010740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525239 containerd[1740]: time="2025-04-30T12:33:57.525033580Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 12:33:57.525239 containerd[1740]: time="2025-04-30T12:33:57.525055260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525239 containerd[1740]: time="2025-04-30T12:33:57.525068700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525239 containerd[1740]: time="2025-04-30T12:33:57.525079820Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 12:33:57.525239 containerd[1740]: time="2025-04-30T12:33:57.525128380Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 12:33:57.525239 containerd[1740]: time="2025-04-30T12:33:57.525148020Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 12:33:57.525239 containerd[1740]: time="2025-04-30T12:33:57.525159100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 12:33:57.525239 containerd[1740]: time="2025-04-30T12:33:57.525170820Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 12:33:57.525477 containerd[1740]: time="2025-04-30T12:33:57.525180220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.525477 containerd[1740]: time="2025-04-30T12:33:57.525193140Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 12:33:57.525477 containerd[1740]: time="2025-04-30T12:33:57.525203140Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 12:33:57.525477 containerd[1740]: time="2025-04-30T12:33:57.525212780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 12:33:57.527479 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 12:33:57.528367 containerd[1740]: time="2025-04-30T12:33:57.525680980Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 12:33:57.528367 containerd[1740]: time="2025-04-30T12:33:57.525740900Z" level=info msg="Connect containerd service"
Apr 30 12:33:57.528367 containerd[1740]: time="2025-04-30T12:33:57.525787300Z" level=info msg="using legacy CRI server"
Apr 30 12:33:57.528367 containerd[1740]: time="2025-04-30T12:33:57.525794740Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 12:33:57.528367 containerd[1740]: time="2025-04-30T12:33:57.525916460Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 12:33:57.535796 containerd[1740]: time="2025-04-30T12:33:57.534647140Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 12:33:57.535796 containerd[1740]: time="2025-04-30T12:33:57.534980860Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 12:33:57.535796 containerd[1740]: time="2025-04-30T12:33:57.535017540Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 12:33:57.535796 containerd[1740]: time="2025-04-30T12:33:57.535056620Z" level=info msg="Start subscribing containerd event"
Apr 30 12:33:57.535796 containerd[1740]: time="2025-04-30T12:33:57.535092340Z" level=info msg="Start recovering state"
Apr 30 12:33:57.535796 containerd[1740]: time="2025-04-30T12:33:57.535155220Z" level=info msg="Start event monitor"
Apr 30 12:33:57.535796 containerd[1740]: time="2025-04-30T12:33:57.535166020Z" level=info msg="Start snapshots syncer"
Apr 30 12:33:57.535796 containerd[1740]: time="2025-04-30T12:33:57.535176380Z" level=info msg="Start cni network conf syncer for default"
Apr 30 12:33:57.535796 containerd[1740]: time="2025-04-30T12:33:57.535183740Z" level=info msg="Start streaming server"
Apr 30 12:33:57.535796 containerd[1740]: time="2025-04-30T12:33:57.535237180Z" level=info msg="containerd successfully booted in 0.114579s"
Apr 30 12:33:57.536764 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 12:33:57.554831 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 12:33:57.570826 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 30 12:33:57.579809 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 12:33:57.612739 tar[1731]: linux-arm64/LICENSE
Apr 30 12:33:57.612945 tar[1731]: linux-arm64/README.md
Apr 30 12:33:57.627945 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 12:33:57.713942 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:33:57.721537 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 12:33:57.721836 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:33:57.730587 systemd[1]: Startup finished in 736ms (kernel) + 13.083s (initrd) + 10.874s (userspace) = 24.695s.
Apr 30 12:33:58.028727 login[1867]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:33:58.033878 login[1868]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:33:58.048699 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 12:33:58.056537 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 12:33:58.067329 systemd-logind[1709]: New session 1 of user core.
Apr 30 12:33:58.073417 systemd-logind[1709]: New session 2 of user core.
Apr 30 12:33:58.080040 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 12:33:58.087421 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 12:33:58.091601 (systemd)[1889]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 12:33:58.095033 systemd-logind[1709]: New session c1 of user core.
Apr 30 12:33:58.207188 kubelet[1877]: E0430 12:33:58.207096 1877 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:33:58.209925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:33:58.210064 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:33:58.211565 systemd[1]: kubelet.service: Consumed 715ms CPU time, 242M memory peak.
Apr 30 12:33:58.270052 systemd[1889]: Queued start job for default target default.target.
Apr 30 12:33:58.276469 systemd[1889]: Created slice app.slice - User Application Slice.
Apr 30 12:33:58.276502 systemd[1889]: Reached target paths.target - Paths.
Apr 30 12:33:58.276548 systemd[1889]: Reached target timers.target - Timers.
Apr 30 12:33:58.277907 systemd[1889]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 12:33:58.287719 systemd[1889]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 12:33:58.287787 systemd[1889]: Reached target sockets.target - Sockets.
Apr 30 12:33:58.287833 systemd[1889]: Reached target basic.target - Basic System.
Apr 30 12:33:58.287861 systemd[1889]: Reached target default.target - Main User Target.
Apr 30 12:33:58.287886 systemd[1889]: Startup finished in 182ms.
Apr 30 12:33:58.288217 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 12:33:58.306591 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 12:33:58.308346 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 12:33:59.416446 waagent[1864]: 2025-04-30T12:33:59.415769Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Apr 30 12:33:59.422125 waagent[1864]: 2025-04-30T12:33:59.422046Z INFO Daemon Daemon OS: flatcar 4230.1.1
Apr 30 12:33:59.427268 waagent[1864]: 2025-04-30T12:33:59.427202Z INFO Daemon Daemon Python: 3.11.11
Apr 30 12:33:59.432085 waagent[1864]: 2025-04-30T12:33:59.432020Z INFO Daemon Daemon Run daemon
Apr 30 12:33:59.436530 waagent[1864]: 2025-04-30T12:33:59.436473Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.1.1'
Apr 30 12:33:59.445861 waagent[1864]: 2025-04-30T12:33:59.445789Z INFO Daemon Daemon Using waagent for provisioning
Apr 30 12:33:59.451686 waagent[1864]: 2025-04-30T12:33:59.451632Z INFO Daemon Daemon Activate resource disk
Apr 30 12:33:59.456960 waagent[1864]: 2025-04-30T12:33:59.456902Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Apr 30 12:33:59.470968 waagent[1864]: 2025-04-30T12:33:59.470893Z INFO Daemon Daemon Found device: None
Apr 30 12:33:59.476058 waagent[1864]: 2025-04-30T12:33:59.475998Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Apr 30 12:33:59.485207 waagent[1864]: 2025-04-30T12:33:59.485144Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Apr 30 12:33:59.497671 waagent[1864]: 2025-04-30T12:33:59.497619Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Apr 30 12:33:59.504300 waagent[1864]: 2025-04-30T12:33:59.504238Z INFO Daemon Daemon Running default provisioning handler
Apr 30 12:33:59.516492 waagent[1864]: 2025-04-30T12:33:59.516116Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Apr 30 12:33:59.530782 waagent[1864]: 2025-04-30T12:33:59.530708Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Apr 30 12:33:59.540999 waagent[1864]: 2025-04-30T12:33:59.540929Z INFO Daemon Daemon cloud-init is enabled: False
Apr 30 12:33:59.546301 waagent[1864]: 2025-04-30T12:33:59.546244Z INFO Daemon Daemon Copying ovf-env.xml
Apr 30 12:33:59.656168 waagent[1864]: 2025-04-30T12:33:59.656063Z INFO Daemon Daemon Successfully mounted dvd
Apr 30 12:33:59.671183 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Apr 30 12:33:59.674209 waagent[1864]: 2025-04-30T12:33:59.674139Z INFO Daemon Daemon Detect protocol endpoint
Apr 30 12:33:59.682511 waagent[1864]: 2025-04-30T12:33:59.679517Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Apr 30 12:33:59.685538 waagent[1864]: 2025-04-30T12:33:59.685478Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Apr 30 12:33:59.692529 waagent[1864]: 2025-04-30T12:33:59.692451Z INFO Daemon Daemon Test for route to 168.63.129.16
Apr 30 12:33:59.698293 waagent[1864]: 2025-04-30T12:33:59.698234Z INFO Daemon Daemon Route to 168.63.129.16 exists
Apr 30 12:33:59.703647 waagent[1864]: 2025-04-30T12:33:59.703570Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Apr 30 12:33:59.735202 waagent[1864]: 2025-04-30T12:33:59.735157Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Apr 30 12:33:59.742412 waagent[1864]: 2025-04-30T12:33:59.742378Z INFO Daemon Daemon Wire protocol version:2012-11-30
Apr 30 12:33:59.747846 waagent[1864]: 2025-04-30T12:33:59.747791Z INFO Daemon Daemon Server preferred version:2015-04-05
Apr 30 12:33:59.951872 waagent[1864]: 2025-04-30T12:33:59.951710Z INFO Daemon Daemon Initializing goal state during protocol detection
Apr 30 12:33:59.959099 waagent[1864]: 2025-04-30T12:33:59.959025Z INFO Daemon Daemon Forcing an update of the goal state.
Apr 30 12:33:59.968478 waagent[1864]: 2025-04-30T12:33:59.968403Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Apr 30 12:34:00.232694 waagent[1864]: 2025-04-30T12:34:00.232594Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164
Apr 30 12:34:00.238766 waagent[1864]: 2025-04-30T12:34:00.238716Z INFO Daemon
Apr 30 12:34:00.242274 waagent[1864]: 2025-04-30T12:34:00.242218Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 354d2af7-6aaa-4552-8ec1-c2396793909b eTag: 2067234220817747089 source: Fabric]
Apr 30 12:34:00.254125 waagent[1864]: 2025-04-30T12:34:00.254076Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Apr 30 12:34:00.261654 waagent[1864]: 2025-04-30T12:34:00.261605Z INFO Daemon
Apr 30 12:34:00.264576 waagent[1864]: 2025-04-30T12:34:00.264524Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Apr 30 12:34:00.276179 waagent[1864]: 2025-04-30T12:34:00.276140Z INFO Daemon Daemon Downloading artifacts profile blob
Apr 30 12:34:00.385768 waagent[1864]: 2025-04-30T12:34:00.385661Z INFO Daemon Downloaded certificate {'thumbprint': '472619DFD74305FBA678EC82AA5DB19D82E6EA72', 'hasPrivateKey': False}
Apr 30 12:34:00.396971 waagent[1864]: 2025-04-30T12:34:00.396914Z INFO Daemon Downloaded certificate {'thumbprint': '160314556E5323B443AFFF1F9116FBB790D8D79A', 'hasPrivateKey': True}
Apr 30 12:34:00.408856 waagent[1864]: 2025-04-30T12:34:00.408806Z INFO Daemon Fetch goal state completed
Apr 30 12:34:00.420988 waagent[1864]: 2025-04-30T12:34:00.420938Z INFO Daemon Daemon Starting provisioning
Apr 30 12:34:00.426348 waagent[1864]: 2025-04-30T12:34:00.426279Z INFO Daemon Daemon Handle ovf-env.xml.
Apr 30 12:34:00.431369 waagent[1864]: 2025-04-30T12:34:00.431317Z INFO Daemon Daemon Set hostname [ci-4230.1.1-a-9a970e7770]
Apr 30 12:34:00.455443 waagent[1864]: 2025-04-30T12:34:00.454423Z INFO Daemon Daemon Publish hostname [ci-4230.1.1-a-9a970e7770]
Apr 30 12:34:00.461207 waagent[1864]: 2025-04-30T12:34:00.461144Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Apr 30 12:34:00.468013 waagent[1864]: 2025-04-30T12:34:00.467956Z INFO Daemon Daemon Primary interface is [eth0]
Apr 30 12:34:00.480374 systemd-networkd[1556]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:34:00.480382 systemd-networkd[1556]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 12:34:00.480409 systemd-networkd[1556]: eth0: DHCP lease lost
Apr 30 12:34:00.481596 waagent[1864]: 2025-04-30T12:34:00.481524Z INFO Daemon Daemon Create user account if not exists
Apr 30 12:34:00.487375 waagent[1864]: 2025-04-30T12:34:00.487280Z INFO Daemon Daemon User core already exists, skip useradd
Apr 30 12:34:00.497816 waagent[1864]: 2025-04-30T12:34:00.493415Z INFO Daemon Daemon Configure sudoer
Apr 30 12:34:00.498586 waagent[1864]: 2025-04-30T12:34:00.498468Z INFO Daemon Daemon Configure sshd
Apr 30 12:34:00.503265 waagent[1864]: 2025-04-30T12:34:00.503202Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Apr 30 12:34:00.517021 waagent[1864]: 2025-04-30T12:34:00.516822Z INFO Daemon Daemon Deploy ssh public key.
Apr 30 12:34:00.526541 systemd-networkd[1556]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
Apr 30 12:34:01.641453 waagent[1864]: 2025-04-30T12:34:01.639775Z INFO Daemon Daemon Provisioning complete
Apr 30 12:34:01.658460 waagent[1864]: 2025-04-30T12:34:01.658396Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Apr 30 12:34:01.665323 waagent[1864]: 2025-04-30T12:34:01.665258Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Apr 30 12:34:01.676464 waagent[1864]: 2025-04-30T12:34:01.675660Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Apr 30 12:34:01.817972 waagent[1948]: 2025-04-30T12:34:01.817871Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Apr 30 12:34:01.818304 waagent[1948]: 2025-04-30T12:34:01.818038Z INFO ExtHandler ExtHandler OS: flatcar 4230.1.1
Apr 30 12:34:01.818304 waagent[1948]: 2025-04-30T12:34:01.818093Z INFO ExtHandler ExtHandler Python: 3.11.11
Apr 30 12:34:02.196450 waagent[1948]: 2025-04-30T12:34:02.196338Z INFO ExtHandler ExtHandler Distro: flatcar-4230.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Apr 30 12:34:02.196635 waagent[1948]: 2025-04-30T12:34:02.196596Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 30 12:34:02.196704 waagent[1948]: 2025-04-30T12:34:02.196673Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 30 12:34:02.205150 waagent[1948]: 2025-04-30T12:34:02.205065Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Apr 30 12:34:02.211435 waagent[1948]: 2025-04-30T12:34:02.211370Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164
Apr 30 12:34:02.211973 waagent[1948]: 2025-04-30T12:34:02.211923Z INFO ExtHandler
Apr 30 12:34:02.212044 waagent[1948]: 2025-04-30T12:34:02.212009Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 83a51ed8-ecea-415e-bc68-093a9bce17f9 eTag: 2067234220817747089 source: Fabric]
Apr 30 12:34:02.212327 waagent[1948]: 2025-04-30T12:34:02.212289Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Apr 30 12:34:02.217917 waagent[1948]: 2025-04-30T12:34:02.217836Z INFO ExtHandler
Apr 30 12:34:02.218021 waagent[1948]: 2025-04-30T12:34:02.217975Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Apr 30 12:34:02.222310 waagent[1948]: 2025-04-30T12:34:02.222262Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Apr 30 12:34:02.381487 waagent[1948]: 2025-04-30T12:34:02.380746Z INFO ExtHandler Downloaded certificate {'thumbprint': '472619DFD74305FBA678EC82AA5DB19D82E6EA72', 'hasPrivateKey': False}
Apr 30 12:34:02.381487 waagent[1948]: 2025-04-30T12:34:02.381218Z INFO ExtHandler Downloaded certificate {'thumbprint': '160314556E5323B443AFFF1F9116FBB790D8D79A', 'hasPrivateKey': True}
Apr 30 12:34:02.381736 waagent[1948]: 2025-04-30T12:34:02.381683Z INFO ExtHandler Fetch goal state completed
Apr 30 12:34:02.401999 waagent[1948]: 2025-04-30T12:34:02.401940Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1948
Apr 30 12:34:02.402160 waagent[1948]: 2025-04-30T12:34:02.402126Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Apr 30 12:34:02.403849 waagent[1948]: 2025-04-30T12:34:02.403798Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.1.1', '', 'Flatcar Container Linux by Kinvolk']
Apr 30 12:34:02.404242 waagent[1948]: 2025-04-30T12:34:02.404206Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Apr 30 12:34:02.492398 waagent[1948]: 2025-04-30T12:34:02.492291Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Apr 30 12:34:02.492566 waagent[1948]: 2025-04-30T12:34:02.492524Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Apr 30 12:34:02.498366 waagent[1948]: 2025-04-30T12:34:02.498321Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Apr 30 12:34:02.504750 systemd[1]: Reload requested from client PID 1963 ('systemctl') (unit waagent.service)...
Apr 30 12:34:02.505028 systemd[1]: Reloading...
Apr 30 12:34:02.589467 zram_generator::config[2002]: No configuration found.
Apr 30 12:34:02.700045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:34:02.802161 systemd[1]: Reloading finished in 296 ms.
Apr 30 12:34:02.819462 waagent[1948]: 2025-04-30T12:34:02.814600Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Apr 30 12:34:02.821713 systemd[1]: Reload requested from client PID 2056 ('systemctl') (unit waagent.service)...
Apr 30 12:34:02.821726 systemd[1]: Reloading...
Apr 30 12:34:02.911459 zram_generator::config[2092]: No configuration found.
Apr 30 12:34:03.023501 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:34:03.130283 systemd[1]: Reloading finished in 308 ms.
Apr 30 12:34:03.149409 waagent[1948]: 2025-04-30T12:34:03.148639Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Apr 30 12:34:03.149409 waagent[1948]: 2025-04-30T12:34:03.148814Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Apr 30 12:34:03.711456 waagent[1948]: 2025-04-30T12:34:03.710941Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Apr 30 12:34:03.711673 waagent[1948]: 2025-04-30T12:34:03.711596Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Apr 30 12:34:03.712538 waagent[1948]: 2025-04-30T12:34:03.712449Z INFO ExtHandler ExtHandler Starting env monitor service.
Apr 30 12:34:03.712964 waagent[1948]: 2025-04-30T12:34:03.712853Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Apr 30 12:34:03.713278 waagent[1948]: 2025-04-30T12:34:03.713168Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Apr 30 12:34:03.713345 waagent[1948]: 2025-04-30T12:34:03.713268Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Apr 30 12:34:03.713760 waagent[1948]: 2025-04-30T12:34:03.713658Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Apr 30 12:34:03.713877 waagent[1948]: 2025-04-30T12:34:03.713761Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Apr 30 12:34:03.715118 waagent[1948]: 2025-04-30T12:34:03.714713Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 30 12:34:03.715118 waagent[1948]: 2025-04-30T12:34:03.714804Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 30 12:34:03.715118 waagent[1948]: 2025-04-30T12:34:03.714951Z INFO EnvHandler ExtHandler Configure routes
Apr 30 12:34:03.715118 waagent[1948]: 2025-04-30T12:34:03.715012Z INFO EnvHandler ExtHandler Gateway:None
Apr 30 12:34:03.715244 waagent[1948]: 2025-04-30T12:34:03.715053Z INFO EnvHandler ExtHandler Routes:None
Apr 30 12:34:03.715360 waagent[1948]: 2025-04-30T12:34:03.715319Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Apr 30 12:34:03.716816 waagent[1948]: 2025-04-30T12:34:03.716775Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 30 12:34:03.717405 waagent[1948]: 2025-04-30T12:34:03.717191Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 30 12:34:03.717698 waagent[1948]: 2025-04-30T12:34:03.717645Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Apr 30 12:34:03.721886 waagent[1948]: 2025-04-30T12:34:03.721830Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Apr 30 12:34:03.721886 waagent[1948]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Apr 30 12:34:03.721886 waagent[1948]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Apr 30 12:34:03.721886 waagent[1948]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Apr 30 12:34:03.721886 waagent[1948]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Apr 30 12:34:03.721886 waagent[1948]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 30 12:34:03.721886 waagent[1948]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 30 12:34:03.754474 waagent[1948]: 2025-04-30T12:34:03.754291Z INFO ExtHandler ExtHandler
Apr 30 12:34:03.754573 waagent[1948]: 2025-04-30T12:34:03.754474Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 54db2185-eefc-44cc-84e2-53d3ef74c299 correlation 9cc34070-4ef5-48c5-ae82-485a81d8c000 created: 2025-04-30T12:32:44.618164Z]
Apr 30 12:34:03.754971 waagent[1948]: 2025-04-30T12:34:03.754917Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Apr 30 12:34:03.756023 waagent[1948]: 2025-04-30T12:34:03.755963Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Apr 30 12:34:03.780840 waagent[1948]: 2025-04-30T12:34:03.780735Z INFO MonitorHandler ExtHandler Network interfaces:
Apr 30 12:34:03.780840 waagent[1948]: Executing ['ip', '-a', '-o', 'link']:
Apr 30 12:34:03.780840 waagent[1948]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Apr 30 12:34:03.780840 waagent[1948]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:20:80 brd ff:ff:ff:ff:ff:ff
Apr 30 12:34:03.780840 waagent[1948]: 3: enP48571s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:20:80 brd ff:ff:ff:ff:ff:ff\ altname enP48571p0s2
Apr 30 12:34:03.780840 waagent[1948]: Executing ['ip', '-4', '-a', '-o', 'address']:
Apr 30 12:34:03.780840 waagent[1948]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Apr 30 12:34:03.780840 waagent[1948]: 2: eth0 inet 10.200.20.24/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Apr 30 12:34:03.780840 waagent[1948]: Executing ['ip', '-6', '-a', '-o', 'address']:
Apr 30 12:34:03.780840 waagent[1948]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Apr 30 12:34:03.780840 waagent[1948]: 2: eth0 inet6 fe80::20d:3aff:fefc:2080/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Apr 30 12:34:03.780840 waagent[1948]: 3: enP48571s1 inet6 fe80::20d:3aff:fefc:2080/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Apr 30 12:34:03.804831 waagent[1948]: 2025-04-30T12:34:03.804672Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: EBC42C26-D6D2-4421-8BB1-91D3CFFC9246;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Apr 30 12:34:03.911946 waagent[1948]: 2025-04-30T12:34:03.911854Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Apr 30 12:34:03.911946 waagent[1948]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 12:34:03.911946 waagent[1948]: pkts bytes target prot opt in out source destination
Apr 30 12:34:03.911946 waagent[1948]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 30 12:34:03.911946 waagent[1948]: pkts bytes target prot opt in out source destination
Apr 30 12:34:03.911946 waagent[1948]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 12:34:03.911946 waagent[1948]: pkts bytes target prot opt in out source destination
Apr 30 12:34:03.911946 waagent[1948]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 30 12:34:03.911946 waagent[1948]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 30 12:34:03.911946 waagent[1948]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 30 12:34:03.915203 waagent[1948]: 2025-04-30T12:34:03.915123Z INFO EnvHandler ExtHandler Current Firewall rules:
Apr 30 12:34:03.915203 waagent[1948]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 12:34:03.915203 waagent[1948]: pkts bytes target prot opt in out source destination
Apr 30 12:34:03.915203 waagent[1948]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 30 12:34:03.915203 waagent[1948]: pkts bytes target prot opt in out source destination
Apr 30 12:34:03.915203 waagent[1948]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 12:34:03.915203 waagent[1948]: pkts bytes target prot opt in out source destination
Apr 30 12:34:03.915203 waagent[1948]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 30 12:34:03.915203 waagent[1948]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 30 12:34:03.915203 waagent[1948]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 30 12:34:03.915491 waagent[1948]: 2025-04-30T12:34:03.915440Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Apr 30 12:34:08.460824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 12:34:08.471622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:34:08.585377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:34:08.598830 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:34:08.644153 kubelet[2191]: E0430 12:34:08.644091 2191 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:34:08.648061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:34:08.648353 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:34:08.648908 systemd[1]: kubelet.service: Consumed 129ms CPU time, 96.9M memory peak.
Apr 30 12:34:18.679361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 12:34:18.686679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:34:18.775695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:34:18.780144 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:34:18.823156 kubelet[2207]: E0430 12:34:18.823081 2207 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:34:18.825611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:34:18.825760 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:34:18.826121 systemd[1]: kubelet.service: Consumed 128ms CPU time, 96.9M memory peak.
Apr 30 12:34:20.232641 chronyd[1694]: Selected source PHC0
Apr 30 12:34:28.929336 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 30 12:34:28.939718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:34:29.035037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:34:29.039307 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:34:29.083408 kubelet[2223]: E0430 12:34:29.083329 2223 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:34:29.086236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:34:29.086536 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:34:29.087109 systemd[1]: kubelet.service: Consumed 135ms CPU time, 96.3M memory peak.
Apr 30 12:34:31.680414 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 12:34:31.682520 systemd[1]: Started sshd@0-10.200.20.24:22-10.200.16.10:58002.service - OpenSSH per-connection server daemon (10.200.16.10:58002).
Apr 30 12:34:32.258182 sshd[2232]: Accepted publickey for core from 10.200.16.10 port 58002 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:34:32.259616 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:34:32.264290 systemd-logind[1709]: New session 3 of user core.
Apr 30 12:34:32.270641 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 12:34:32.660501 systemd[1]: Started sshd@1-10.200.20.24:22-10.200.16.10:58016.service - OpenSSH per-connection server daemon (10.200.16.10:58016).
Apr 30 12:34:33.119418 sshd[2237]: Accepted publickey for core from 10.200.16.10 port 58016 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:34:33.120777 sshd-session[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:34:33.126497 systemd-logind[1709]: New session 4 of user core.
Apr 30 12:34:33.132680 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 12:34:33.441610 sshd[2239]: Connection closed by 10.200.16.10 port 58016
Apr 30 12:34:33.442204 sshd-session[2237]: pam_unix(sshd:session): session closed for user core
Apr 30 12:34:33.445686 systemd[1]: sshd@1-10.200.20.24:22-10.200.16.10:58016.service: Deactivated successfully.
Apr 30 12:34:33.447300 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 12:34:33.448510 systemd-logind[1709]: Session 4 logged out. Waiting for processes to exit.
Apr 30 12:34:33.449421 systemd-logind[1709]: Removed session 4.
Apr 30 12:34:33.532684 systemd[1]: Started sshd@2-10.200.20.24:22-10.200.16.10:58030.service - OpenSSH per-connection server daemon (10.200.16.10:58030).
Apr 30 12:34:33.977262 sshd[2245]: Accepted publickey for core from 10.200.16.10 port 58030 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:34:33.978645 sshd-session[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:34:33.983634 systemd-logind[1709]: New session 5 of user core.
Apr 30 12:34:33.989595 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 12:34:34.297035 sshd[2247]: Connection closed by 10.200.16.10 port 58030
Apr 30 12:34:34.297613 sshd-session[2245]: pam_unix(sshd:session): session closed for user core
Apr 30 12:34:34.301306 systemd[1]: sshd@2-10.200.20.24:22-10.200.16.10:58030.service: Deactivated successfully.
Apr 30 12:34:34.303031 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 12:34:34.304624 systemd-logind[1709]: Session 5 logged out. Waiting for processes to exit.
Apr 30 12:34:34.305794 systemd-logind[1709]: Removed session 5.
Apr 30 12:34:34.391883 systemd[1]: Started sshd@3-10.200.20.24:22-10.200.16.10:58040.service - OpenSSH per-connection server daemon (10.200.16.10:58040).
Apr 30 12:34:34.868789 sshd[2253]: Accepted publickey for core from 10.200.16.10 port 58040 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:34:34.870106 sshd-session[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:34:34.875499 systemd-logind[1709]: New session 6 of user core.
Apr 30 12:34:34.880821 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 12:34:35.207457 sshd[2255]: Connection closed by 10.200.16.10 port 58040
Apr 30 12:34:35.207994 sshd-session[2253]: pam_unix(sshd:session): session closed for user core
Apr 30 12:34:35.212385 systemd[1]: sshd@3-10.200.20.24:22-10.200.16.10:58040.service: Deactivated successfully.
Apr 30 12:34:35.214366 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 12:34:35.215329 systemd-logind[1709]: Session 6 logged out. Waiting for processes to exit. Apr 30 12:34:35.216215 systemd-logind[1709]: Removed session 6. Apr 30 12:34:35.298722 systemd[1]: Started sshd@4-10.200.20.24:22-10.200.16.10:58048.service - OpenSSH per-connection server daemon (10.200.16.10:58048). Apr 30 12:34:35.776137 sshd[2261]: Accepted publickey for core from 10.200.16.10 port 58048 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak Apr 30 12:34:35.777369 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:34:35.782353 systemd-logind[1709]: New session 7 of user core. Apr 30 12:34:35.788603 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 12:34:36.151377 sudo[2264]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 12:34:36.151681 sudo[2264]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:34:36.185467 sudo[2264]: pam_unix(sudo:session): session closed for user root Apr 30 12:34:36.255471 sshd[2263]: Connection closed by 10.200.16.10 port 58048 Apr 30 12:34:36.256314 sshd-session[2261]: pam_unix(sshd:session): session closed for user core Apr 30 12:34:36.260290 systemd-logind[1709]: Session 7 logged out. Waiting for processes to exit. Apr 30 12:34:36.260933 systemd[1]: sshd@4-10.200.20.24:22-10.200.16.10:58048.service: Deactivated successfully. Apr 30 12:34:36.262934 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 12:34:36.265289 systemd-logind[1709]: Removed session 7. Apr 30 12:34:36.346717 systemd[1]: Started sshd@5-10.200.20.24:22-10.200.16.10:58060.service - OpenSSH per-connection server daemon (10.200.16.10:58060). 
Apr 30 12:34:36.825568 sshd[2270]: Accepted publickey for core from 10.200.16.10 port 58060 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak Apr 30 12:34:36.826910 sshd-session[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:34:36.832405 systemd-logind[1709]: New session 8 of user core. Apr 30 12:34:36.837621 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 12:34:37.096165 sudo[2274]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 12:34:37.096467 sudo[2274]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:34:37.100663 sudo[2274]: pam_unix(sudo:session): session closed for user root Apr 30 12:34:37.106138 sudo[2273]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 30 12:34:37.106470 sudo[2273]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:34:37.129763 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 12:34:37.154847 augenrules[2296]: No rules Apr 30 12:34:37.156518 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 12:34:37.157515 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 12:34:37.159181 sudo[2273]: pam_unix(sudo:session): session closed for user root Apr 30 12:34:37.229259 sshd[2272]: Connection closed by 10.200.16.10 port 58060 Apr 30 12:34:37.230087 sshd-session[2270]: pam_unix(sshd:session): session closed for user core Apr 30 12:34:37.234021 systemd-logind[1709]: Session 8 logged out. Waiting for processes to exit. Apr 30 12:34:37.234219 systemd[1]: sshd@5-10.200.20.24:22-10.200.16.10:58060.service: Deactivated successfully. Apr 30 12:34:37.237160 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 12:34:37.238675 systemd-logind[1709]: Removed session 8. 
Apr 30 12:34:37.311413 systemd[1]: Started sshd@6-10.200.20.24:22-10.200.16.10:58074.service - OpenSSH per-connection server daemon (10.200.16.10:58074). Apr 30 12:34:37.765747 sshd[2305]: Accepted publickey for core from 10.200.16.10 port 58074 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak Apr 30 12:34:37.767174 sshd-session[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:34:37.771576 systemd-logind[1709]: New session 9 of user core. Apr 30 12:34:37.777640 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 12:34:38.020054 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 12:34:38.020351 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:34:39.179213 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 12:34:39.186722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:34:39.289776 (dockerd)[2328]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 12:34:39.290068 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 12:34:39.735591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 12:34:39.747936 (kubelet)[2334]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:34:39.799212 kubelet[2334]: E0430 12:34:39.799124 2334 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:34:39.802310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:34:39.802685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:34:39.803262 systemd[1]: kubelet.service: Consumed 144ms CPU time, 96.4M memory peak. Apr 30 12:34:40.492405 dockerd[2328]: time="2025-04-30T12:34:40.491956840Z" level=info msg="Starting up" Apr 30 12:34:40.730291 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport502195460-merged.mount: Deactivated successfully. Apr 30 12:34:40.760129 dockerd[2328]: time="2025-04-30T12:34:40.759801244Z" level=info msg="Loading containers: start." Apr 30 12:34:40.894510 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Apr 30 12:34:40.938472 kernel: Initializing XFRM netlink socket Apr 30 12:34:41.076641 systemd-networkd[1556]: docker0: Link UP Apr 30 12:34:41.113780 dockerd[2328]: time="2025-04-30T12:34:41.113729674Z" level=info msg="Loading containers: done." 
Apr 30 12:34:41.138105 dockerd[2328]: time="2025-04-30T12:34:41.137630453Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 12:34:41.138105 dockerd[2328]: time="2025-04-30T12:34:41.137757613Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Apr 30 12:34:41.138105 dockerd[2328]: time="2025-04-30T12:34:41.137891213Z" level=info msg="Daemon has completed initialization" Apr 30 12:34:41.188618 dockerd[2328]: time="2025-04-30T12:34:41.188556172Z" level=info msg="API listen on /run/docker.sock" Apr 30 12:34:41.188894 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 12:34:41.724772 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2766877448-merged.mount: Deactivated successfully. Apr 30 12:34:42.038160 update_engine[1714]: I20250430 12:34:42.037465 1714 update_attempter.cc:509] Updating boot flags... Apr 30 12:34:42.094646 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2540) Apr 30 12:34:42.892728 containerd[1740]: time="2025-04-30T12:34:42.892676592Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 12:34:43.770239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount298476449.mount: Deactivated successfully. 
Apr 30 12:34:45.712498 containerd[1740]: time="2025-04-30T12:34:45.712412070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:45.716165 containerd[1740]: time="2025-04-30T12:34:45.716114392Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150" Apr 30 12:34:45.720241 containerd[1740]: time="2025-04-30T12:34:45.720207795Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:45.725737 containerd[1740]: time="2025-04-30T12:34:45.725681359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:45.728340 containerd[1740]: time="2025-04-30T12:34:45.728264521Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.835543289s" Apr 30 12:34:45.728340 containerd[1740]: time="2025-04-30T12:34:45.728327161Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" Apr 30 12:34:45.750260 containerd[1740]: time="2025-04-30T12:34:45.750219096Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 12:34:48.032899 containerd[1740]: time="2025-04-30T12:34:48.032826298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:48.036254 containerd[1740]: time="2025-04-30T12:34:48.035998460Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550" Apr 30 12:34:48.038729 containerd[1740]: time="2025-04-30T12:34:48.038666222Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:48.045901 containerd[1740]: time="2025-04-30T12:34:48.045835226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:48.047196 containerd[1740]: time="2025-04-30T12:34:48.047043827Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 2.296782171s" Apr 30 12:34:48.047196 containerd[1740]: time="2025-04-30T12:34:48.047081987Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" Apr 30 12:34:48.067966 containerd[1740]: time="2025-04-30T12:34:48.067725921Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 12:34:49.434477 containerd[1740]: time="2025-04-30T12:34:49.433591616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:49.436104 containerd[1740]: 
time="2025-04-30T12:34:49.436048538Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945" Apr 30 12:34:49.441468 containerd[1740]: time="2025-04-30T12:34:49.441398181Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:49.446858 containerd[1740]: time="2025-04-30T12:34:49.446802505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:49.447989 containerd[1740]: time="2025-04-30T12:34:49.447862466Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.380096585s" Apr 30 12:34:49.447989 containerd[1740]: time="2025-04-30T12:34:49.447898906Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" Apr 30 12:34:49.469169 containerd[1740]: time="2025-04-30T12:34:49.469128360Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 12:34:49.929075 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 30 12:34:49.934645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:34:50.023884 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 12:34:50.028171 (kubelet)[2676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:34:50.067156 kubelet[2676]: E0430 12:34:50.067095 2676 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:34:50.069705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:34:50.069973 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:34:50.070566 systemd[1]: kubelet.service: Consumed 125ms CPU time, 94.8M memory peak. Apr 30 12:34:50.781714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1446889449.mount: Deactivated successfully. Apr 30 12:34:51.865474 containerd[1740]: time="2025-04-30T12:34:51.865050679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:51.868953 containerd[1740]: time="2025-04-30T12:34:51.868887202Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705" Apr 30 12:34:51.871671 containerd[1740]: time="2025-04-30T12:34:51.871616244Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:51.875450 containerd[1740]: time="2025-04-30T12:34:51.875330126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:51.876225 containerd[1740]: time="2025-04-30T12:34:51.876073487Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 2.406904887s" Apr 30 12:34:51.876225 containerd[1740]: time="2025-04-30T12:34:51.876113407Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" Apr 30 12:34:51.898913 containerd[1740]: time="2025-04-30T12:34:51.898872143Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 12:34:52.573311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount414607687.mount: Deactivated successfully. Apr 30 12:34:53.559975 containerd[1740]: time="2025-04-30T12:34:53.559917910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:53.565212 containerd[1740]: time="2025-04-30T12:34:53.565157313Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Apr 30 12:34:53.570101 containerd[1740]: time="2025-04-30T12:34:53.570046317Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:53.575416 containerd[1740]: time="2025-04-30T12:34:53.575347480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:53.576843 containerd[1740]: time="2025-04-30T12:34:53.576373321Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id 
\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.677460258s" Apr 30 12:34:53.576843 containerd[1740]: time="2025-04-30T12:34:53.576409681Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Apr 30 12:34:53.597286 containerd[1740]: time="2025-04-30T12:34:53.597243735Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 12:34:54.218872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3103879782.mount: Deactivated successfully. Apr 30 12:34:54.244887 containerd[1740]: time="2025-04-30T12:34:54.244826613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:54.247903 containerd[1740]: time="2025-04-30T12:34:54.247616615Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Apr 30 12:34:54.253690 containerd[1740]: time="2025-04-30T12:34:54.253624699Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:54.261208 containerd[1740]: time="2025-04-30T12:34:54.261114184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:54.262309 containerd[1740]: time="2025-04-30T12:34:54.262112145Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 664.82497ms" Apr 30 12:34:54.262309 containerd[1740]: time="2025-04-30T12:34:54.262159865Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Apr 30 12:34:54.287223 containerd[1740]: time="2025-04-30T12:34:54.286953242Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 12:34:54.950283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3484104948.mount: Deactivated successfully. Apr 30 12:34:59.328955 containerd[1740]: time="2025-04-30T12:34:59.328904255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:59.331363 containerd[1740]: time="2025-04-30T12:34:59.331310857Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Apr 30 12:34:59.336203 containerd[1740]: time="2025-04-30T12:34:59.336140820Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:59.343045 containerd[1740]: time="2025-04-30T12:34:59.342977345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:34:59.344308 containerd[1740]: time="2025-04-30T12:34:59.344170666Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 5.057140024s" Apr 30 
12:34:59.344308 containerd[1740]: time="2025-04-30T12:34:59.344207466Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Apr 30 12:35:00.073382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 30 12:35:00.080704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:35:00.211347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:35:00.217120 (kubelet)[2852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:35:00.275508 kubelet[2852]: E0430 12:35:00.275161 2852 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:35:00.278821 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:35:00.278990 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:35:00.279362 systemd[1]: kubelet.service: Consumed 142ms CPU time, 96.5M memory peak. Apr 30 12:35:04.019708 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:35:04.019869 systemd[1]: kubelet.service: Consumed 142ms CPU time, 96.5M memory peak. Apr 30 12:35:04.026696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:35:04.048152 systemd[1]: Reload requested from client PID 2881 ('systemctl') (unit session-9.scope)... Apr 30 12:35:04.048173 systemd[1]: Reloading... Apr 30 12:35:04.189664 zram_generator::config[2931]: No configuration found. 
Apr 30 12:35:04.283776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:35:04.388352 systemd[1]: Reloading finished in 339 ms. Apr 30 12:35:04.516010 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 12:35:04.516095 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 12:35:04.516560 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:35:04.516823 systemd[1]: kubelet.service: Consumed 78ms CPU time, 81.4M memory peak. Apr 30 12:35:04.524972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:35:04.628453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:35:04.638024 (kubelet)[2994]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:35:04.681630 kubelet[2994]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:35:04.681990 kubelet[2994]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 12:35:04.682080 kubelet[2994]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 12:35:04.683755 kubelet[2994]: I0430 12:35:04.683698 2994 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:35:05.701459 kubelet[2994]: I0430 12:35:05.700716 2994 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 12:35:05.701459 kubelet[2994]: I0430 12:35:05.700750 2994 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:35:05.701459 kubelet[2994]: I0430 12:35:05.700979 2994 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 12:35:05.741598 kubelet[2994]: E0430 12:35:05.741567 2994 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:05.742018 kubelet[2994]: I0430 12:35:05.741912 2994 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:35:05.753119 kubelet[2994]: I0430 12:35:05.753088 2994 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 12:35:05.754718 kubelet[2994]: I0430 12:35:05.754671 2994 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:35:05.754918 kubelet[2994]: I0430 12:35:05.754724 2994 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-a-9a970e7770","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 12:35:05.755012 kubelet[2994]: I0430 12:35:05.754922 2994 topology_manager.go:138] "Creating topology manager with none policy" Apr 
30 12:35:05.755012 kubelet[2994]: I0430 12:35:05.754930 2994 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 12:35:05.755092 kubelet[2994]: I0430 12:35:05.755070 2994 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:35:05.756781 kubelet[2994]: I0430 12:35:05.756759 2994 kubelet.go:400] "Attempting to sync node with API server" Apr 30 12:35:05.757295 kubelet[2994]: W0430 12:35:05.757253 2994 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-9a970e7770&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:05.757335 kubelet[2994]: E0430 12:35:05.757305 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-9a970e7770&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:05.757335 kubelet[2994]: I0430 12:35:05.757324 2994 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:35:05.757388 kubelet[2994]: I0430 12:35:05.757364 2994 kubelet.go:312] "Adding apiserver pod source" Apr 30 12:35:05.757388 kubelet[2994]: I0430 12:35:05.757375 2994 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:35:05.758461 kubelet[2994]: I0430 12:35:05.758178 2994 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:35:05.758461 kubelet[2994]: I0430 12:35:05.758352 2994 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:35:05.758461 kubelet[2994]: W0430 12:35:05.758394 2994 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 12:35:05.759044 kubelet[2994]: I0430 12:35:05.759009 2994 server.go:1264] "Started kubelet" Apr 30 12:35:05.759184 kubelet[2994]: W0430 12:35:05.759139 2994 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:05.759226 kubelet[2994]: E0430 12:35:05.759189 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:05.767223 kubelet[2994]: I0430 12:35:05.767184 2994 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:35:05.769182 kubelet[2994]: I0430 12:35:05.767845 2994 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:35:05.769182 kubelet[2994]: I0430 12:35:05.768235 2994 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:35:05.769182 kubelet[2994]: E0430 12:35:05.768926 2994 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-a-9a970e7770.183b18c2ddbbb760 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-a-9a970e7770,UID:ci-4230.1.1-a-9a970e7770,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-a-9a970e7770,},FirstTimestamp:2025-04-30 12:35:05.758988128 +0000 UTC m=+1.117583333,LastTimestamp:2025-04-30 12:35:05.758988128 +0000 UTC 
m=+1.117583333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-a-9a970e7770,}" Apr 30 12:35:05.771359 kubelet[2994]: I0430 12:35:05.769725 2994 server.go:455] "Adding debug handlers to kubelet server" Apr 30 12:35:05.772770 kubelet[2994]: I0430 12:35:05.772595 2994 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:35:05.776796 kubelet[2994]: E0430 12:35:05.776500 2994 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230.1.1-a-9a970e7770\" not found" Apr 30 12:35:05.776924 kubelet[2994]: I0430 12:35:05.776813 2994 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 12:35:05.777492 kubelet[2994]: I0430 12:35:05.777009 2994 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 12:35:05.777492 kubelet[2994]: I0430 12:35:05.777079 2994 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:35:05.777605 kubelet[2994]: W0430 12:35:05.777510 2994 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:05.777605 kubelet[2994]: E0430 12:35:05.777557 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:05.777775 kubelet[2994]: E0430 12:35:05.777735 2994 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 12:35:05.779262 kubelet[2994]: E0430 12:35:05.779197 2994 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-9a970e7770?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="200ms" Apr 30 12:35:05.779604 kubelet[2994]: I0430 12:35:05.779574 2994 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:35:05.779695 kubelet[2994]: I0430 12:35:05.779671 2994 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:35:05.782591 kubelet[2994]: I0430 12:35:05.782552 2994 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:35:05.810120 kubelet[2994]: I0430 12:35:05.810096 2994 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 12:35:05.810459 kubelet[2994]: I0430 12:35:05.810252 2994 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 12:35:05.810889 kubelet[2994]: I0430 12:35:05.810864 2994 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:35:05.814939 kubelet[2994]: I0430 12:35:05.814624 2994 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:35:05.816838 kubelet[2994]: I0430 12:35:05.816722 2994 policy_none.go:49] "None policy: Start" Apr 30 12:35:05.817681 kubelet[2994]: I0430 12:35:05.817642 2994 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 12:35:05.817681 kubelet[2994]: I0430 12:35:05.817680 2994 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 12:35:05.817789 kubelet[2994]: I0430 12:35:05.817698 2994 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 12:35:05.817789 kubelet[2994]: E0430 12:35:05.817743 2994 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:35:05.818179 kubelet[2994]: I0430 12:35:05.818097 2994 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 12:35:05.818179 kubelet[2994]: I0430 12:35:05.818123 2994 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:35:05.819634 kubelet[2994]: W0430 12:35:05.818870 2994 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:05.819634 kubelet[2994]: E0430 12:35:05.818902 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:05.827602 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 12:35:05.837333 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 12:35:05.841213 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 30 12:35:05.850789 kubelet[2994]: I0430 12:35:05.850354 2994 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:35:05.850789 kubelet[2994]: I0430 12:35:05.850572 2994 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:35:05.850789 kubelet[2994]: I0430 12:35:05.850667 2994 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:35:05.852729 kubelet[2994]: E0430 12:35:05.852576 2994 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-a-9a970e7770\" not found" Apr 30 12:35:05.879439 kubelet[2994]: I0430 12:35:05.879390 2994 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-9a970e7770" Apr 30 12:35:05.880031 kubelet[2994]: E0430 12:35:05.880004 2994 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4230.1.1-a-9a970e7770" Apr 30 12:35:05.918292 kubelet[2994]: I0430 12:35:05.918230 2994 topology_manager.go:215] "Topology Admit Handler" podUID="19a9c8c5a40532303383093972e728be" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.1-a-9a970e7770" Apr 30 12:35:05.920040 kubelet[2994]: I0430 12:35:05.920001 2994 topology_manager.go:215] "Topology Admit Handler" podUID="44001eb3d1ee727455ff9aebc450a56f" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.1-a-9a970e7770" Apr 30 12:35:05.922452 kubelet[2994]: I0430 12:35:05.922301 2994 topology_manager.go:215] "Topology Admit Handler" podUID="f39dda14a5bacb5c2f45a158fe99569f" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.1-a-9a970e7770" Apr 30 12:35:05.930524 systemd[1]: Created slice kubepods-burstable-pod19a9c8c5a40532303383093972e728be.slice - libcontainer container 
kubepods-burstable-pod19a9c8c5a40532303383093972e728be.slice. Apr 30 12:35:05.948925 systemd[1]: Created slice kubepods-burstable-pod44001eb3d1ee727455ff9aebc450a56f.slice - libcontainer container kubepods-burstable-pod44001eb3d1ee727455ff9aebc450a56f.slice. Apr 30 12:35:05.954165 systemd[1]: Created slice kubepods-burstable-podf39dda14a5bacb5c2f45a158fe99569f.slice - libcontainer container kubepods-burstable-podf39dda14a5bacb5c2f45a158fe99569f.slice. Apr 30 12:35:05.977538 kubelet[2994]: I0430 12:35:05.977501 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f39dda14a5bacb5c2f45a158fe99569f-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-a-9a970e7770\" (UID: \"f39dda14a5bacb5c2f45a158fe99569f\") " pod="kube-system/kube-scheduler-ci-4230.1.1-a-9a970e7770" Apr 30 12:35:05.977671 kubelet[2994]: I0430 12:35:05.977547 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19a9c8c5a40532303383093972e728be-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-a-9a970e7770\" (UID: \"19a9c8c5a40532303383093972e728be\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-9a970e7770" Apr 30 12:35:05.977671 kubelet[2994]: I0430 12:35:05.977571 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44001eb3d1ee727455ff9aebc450a56f-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-9a970e7770\" (UID: \"44001eb3d1ee727455ff9aebc450a56f\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-9a970e7770" Apr 30 12:35:05.977671 kubelet[2994]: I0430 12:35:05.977587 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/44001eb3d1ee727455ff9aebc450a56f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-a-9a970e7770\" (UID: \"44001eb3d1ee727455ff9aebc450a56f\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-9a970e7770" Apr 30 12:35:05.977671 kubelet[2994]: I0430 12:35:05.977601 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19a9c8c5a40532303383093972e728be-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-a-9a970e7770\" (UID: \"19a9c8c5a40532303383093972e728be\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-9a970e7770" Apr 30 12:35:05.977671 kubelet[2994]: I0430 12:35:05.977629 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19a9c8c5a40532303383093972e728be-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-a-9a970e7770\" (UID: \"19a9c8c5a40532303383093972e728be\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-9a970e7770" Apr 30 12:35:05.977779 kubelet[2994]: I0430 12:35:05.977643 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44001eb3d1ee727455ff9aebc450a56f-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-9a970e7770\" (UID: \"44001eb3d1ee727455ff9aebc450a56f\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-9a970e7770" Apr 30 12:35:05.977779 kubelet[2994]: I0430 12:35:05.977656 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/44001eb3d1ee727455ff9aebc450a56f-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-a-9a970e7770\" (UID: \"44001eb3d1ee727455ff9aebc450a56f\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-9a970e7770" Apr 30 12:35:05.977779 kubelet[2994]: I0430 
12:35:05.977671 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44001eb3d1ee727455ff9aebc450a56f-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-a-9a970e7770\" (UID: \"44001eb3d1ee727455ff9aebc450a56f\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-9a970e7770" Apr 30 12:35:05.979915 kubelet[2994]: E0430 12:35:05.979881 2994 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-9a970e7770?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="400ms" Apr 30 12:35:06.081829 kubelet[2994]: I0430 12:35:06.081796 2994 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-9a970e7770" Apr 30 12:35:06.082196 kubelet[2994]: E0430 12:35:06.082163 2994 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4230.1.1-a-9a970e7770" Apr 30 12:35:06.246793 containerd[1740]: time="2025-04-30T12:35:06.246674648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-a-9a970e7770,Uid:19a9c8c5a40532303383093972e728be,Namespace:kube-system,Attempt:0,}" Apr 30 12:35:06.253531 containerd[1740]: time="2025-04-30T12:35:06.253394772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-a-9a970e7770,Uid:44001eb3d1ee727455ff9aebc450a56f,Namespace:kube-system,Attempt:0,}" Apr 30 12:35:06.259125 containerd[1740]: time="2025-04-30T12:35:06.258998896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-a-9a970e7770,Uid:f39dda14a5bacb5c2f45a158fe99569f,Namespace:kube-system,Attempt:0,}" Apr 30 12:35:06.381173 kubelet[2994]: E0430 12:35:06.381123 2994 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-9a970e7770?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="800ms" Apr 30 12:35:06.484371 kubelet[2994]: I0430 12:35:06.484227 2994 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-9a970e7770" Apr 30 12:35:06.484605 kubelet[2994]: E0430 12:35:06.484571 2994 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4230.1.1-a-9a970e7770" Apr 30 12:35:06.880815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1601393321.mount: Deactivated successfully. Apr 30 12:35:06.913053 containerd[1740]: time="2025-04-30T12:35:06.912982884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:35:06.925503 containerd[1740]: time="2025-04-30T12:35:06.925410772Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 30 12:35:06.929967 containerd[1740]: time="2025-04-30T12:35:06.929924655Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:35:06.937201 containerd[1740]: time="2025-04-30T12:35:06.936152259Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:35:06.945918 containerd[1740]: time="2025-04-30T12:35:06.945628785Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:35:06.951827 containerd[1740]: time="2025-04-30T12:35:06.951786309Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:35:06.954528 containerd[1740]: time="2025-04-30T12:35:06.954488631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:35:06.955659 containerd[1740]: time="2025-04-30T12:35:06.955616512Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 708.851424ms" Apr 30 12:35:06.958554 containerd[1740]: time="2025-04-30T12:35:06.958495074Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:35:06.964801 containerd[1740]: time="2025-04-30T12:35:06.964609918Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 711.109746ms" Apr 30 12:35:06.999003 containerd[1740]: time="2025-04-30T12:35:06.998944340Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 739.843204ms" Apr 30 12:35:07.024334 kubelet[2994]: W0430 12:35:07.024219 2994 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:07.024334 kubelet[2994]: E0430 12:35:07.024267 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:07.181886 kubelet[2994]: E0430 12:35:07.181832 2994 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-9a970e7770?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="1.6s" Apr 30 12:35:07.206567 kubelet[2994]: W0430 12:35:07.206503 2994 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-9a970e7770&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:07.206567 kubelet[2994]: E0430 12:35:07.206570 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-9a970e7770&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:07.278996 kubelet[2994]: W0430 12:35:07.278927 2994 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.200.20.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:07.278996 kubelet[2994]: E0430 12:35:07.278998 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:07.286474 kubelet[2994]: I0430 12:35:07.286420 2994 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-9a970e7770" Apr 30 12:35:07.286868 kubelet[2994]: E0430 12:35:07.286833 2994 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4230.1.1-a-9a970e7770" Apr 30 12:35:07.379837 kubelet[2994]: W0430 12:35:07.379798 2994 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:07.379837 kubelet[2994]: E0430 12:35:07.379839 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:07.788346 kubelet[2994]: E0430 12:35:07.788306 2994 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.24:6443: connect: connection refused Apr 30 12:35:07.846161 containerd[1740]: 
time="2025-04-30T12:35:07.845929575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:35:07.846645 containerd[1740]: time="2025-04-30T12:35:07.846467215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:35:07.846645 containerd[1740]: time="2025-04-30T12:35:07.846570255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:35:07.846645 containerd[1740]: time="2025-04-30T12:35:07.846601855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:35:07.847256 containerd[1740]: time="2025-04-30T12:35:07.846989976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:35:07.847256 containerd[1740]: time="2025-04-30T12:35:07.847046096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:35:07.848305 containerd[1740]: time="2025-04-30T12:35:07.848190777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:35:07.849351 containerd[1740]: time="2025-04-30T12:35:07.848813257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:35:07.849651 containerd[1740]: time="2025-04-30T12:35:07.849596657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:35:07.849778 containerd[1740]: time="2025-04-30T12:35:07.849755098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:35:07.850123 containerd[1740]: time="2025-04-30T12:35:07.850068578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:35:07.851018 containerd[1740]: time="2025-04-30T12:35:07.850937018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:35:07.889666 systemd[1]: Started cri-containerd-11f47bfaccc409918faeb489fe13739cf212c667b9d37db6420ba517029c5137.scope - libcontainer container 11f47bfaccc409918faeb489fe13739cf212c667b9d37db6420ba517029c5137. Apr 30 12:35:07.892121 systemd[1]: Started cri-containerd-6ae0d92d65ac0d7fc218b750e6a018dc05291a2a5932e26be2251a7e01215c9b.scope - libcontainer container 6ae0d92d65ac0d7fc218b750e6a018dc05291a2a5932e26be2251a7e01215c9b. Apr 30 12:35:07.895571 systemd[1]: Started cri-containerd-ccb7d167e1f08390ed87d67b756d00e51130dd787d3e1aad6d976ab465a28c03.scope - libcontainer container ccb7d167e1f08390ed87d67b756d00e51130dd787d3e1aad6d976ab465a28c03. 
Apr 30 12:35:07.955903 containerd[1740]: time="2025-04-30T12:35:07.955833647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-a-9a970e7770,Uid:f39dda14a5bacb5c2f45a158fe99569f,Namespace:kube-system,Attempt:0,} returns sandbox id \"11f47bfaccc409918faeb489fe13739cf212c667b9d37db6420ba517029c5137\"" Apr 30 12:35:07.961399 containerd[1740]: time="2025-04-30T12:35:07.961266331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-a-9a970e7770,Uid:19a9c8c5a40532303383093972e728be,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ae0d92d65ac0d7fc218b750e6a018dc05291a2a5932e26be2251a7e01215c9b\"" Apr 30 12:35:07.964882 containerd[1740]: time="2025-04-30T12:35:07.964780093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-a-9a970e7770,Uid:44001eb3d1ee727455ff9aebc450a56f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccb7d167e1f08390ed87d67b756d00e51130dd787d3e1aad6d976ab465a28c03\"" Apr 30 12:35:07.967550 containerd[1740]: time="2025-04-30T12:35:07.966569334Z" level=info msg="CreateContainer within sandbox \"11f47bfaccc409918faeb489fe13739cf212c667b9d37db6420ba517029c5137\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 12:35:07.969361 containerd[1740]: time="2025-04-30T12:35:07.969317216Z" level=info msg="CreateContainer within sandbox \"6ae0d92d65ac0d7fc218b750e6a018dc05291a2a5932e26be2251a7e01215c9b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 12:35:07.970994 containerd[1740]: time="2025-04-30T12:35:07.970947297Z" level=info msg="CreateContainer within sandbox \"ccb7d167e1f08390ed87d67b756d00e51130dd787d3e1aad6d976ab465a28c03\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 12:35:08.011134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2867146806.mount: Deactivated successfully. 
Apr 30 12:35:08.045696 containerd[1740]: time="2025-04-30T12:35:08.045462826Z" level=info msg="CreateContainer within sandbox \"11f47bfaccc409918faeb489fe13739cf212c667b9d37db6420ba517029c5137\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"258e142159e246bb014ffa6e8038380abcc53d8a7474a56994d210fe517ebd0f\"" Apr 30 12:35:08.046732 containerd[1740]: time="2025-04-30T12:35:08.046698227Z" level=info msg="StartContainer for \"258e142159e246bb014ffa6e8038380abcc53d8a7474a56994d210fe517ebd0f\"" Apr 30 12:35:08.070229 containerd[1740]: time="2025-04-30T12:35:08.070163642Z" level=info msg="CreateContainer within sandbox \"6ae0d92d65ac0d7fc218b750e6a018dc05291a2a5932e26be2251a7e01215c9b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4b6a71a60683ffcc988b15324ee726984f269737da16430f221d6b51fc522819\"" Apr 30 12:35:08.071457 containerd[1740]: time="2025-04-30T12:35:08.071012282Z" level=info msg="StartContainer for \"4b6a71a60683ffcc988b15324ee726984f269737da16430f221d6b51fc522819\"" Apr 30 12:35:08.072364 containerd[1740]: time="2025-04-30T12:35:08.072334843Z" level=info msg="CreateContainer within sandbox \"ccb7d167e1f08390ed87d67b756d00e51130dd787d3e1aad6d976ab465a28c03\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b42ff6b3d51e5b2e8560a4a0497c2498308cf50c49cb5fc7b95a3aab09f246c4\"" Apr 30 12:35:08.073236 containerd[1740]: time="2025-04-30T12:35:08.073199724Z" level=info msg="StartContainer for \"b42ff6b3d51e5b2e8560a4a0497c2498308cf50c49cb5fc7b95a3aab09f246c4\"" Apr 30 12:35:08.075820 systemd[1]: Started cri-containerd-258e142159e246bb014ffa6e8038380abcc53d8a7474a56994d210fe517ebd0f.scope - libcontainer container 258e142159e246bb014ffa6e8038380abcc53d8a7474a56994d210fe517ebd0f. 
Apr 30 12:35:08.113867 systemd[1]: Started cri-containerd-4b6a71a60683ffcc988b15324ee726984f269737da16430f221d6b51fc522819.scope - libcontainer container 4b6a71a60683ffcc988b15324ee726984f269737da16430f221d6b51fc522819. Apr 30 12:35:08.135676 systemd[1]: Started cri-containerd-b42ff6b3d51e5b2e8560a4a0497c2498308cf50c49cb5fc7b95a3aab09f246c4.scope - libcontainer container b42ff6b3d51e5b2e8560a4a0497c2498308cf50c49cb5fc7b95a3aab09f246c4. Apr 30 12:35:08.145184 containerd[1740]: time="2025-04-30T12:35:08.144935051Z" level=info msg="StartContainer for \"258e142159e246bb014ffa6e8038380abcc53d8a7474a56994d210fe517ebd0f\" returns successfully" Apr 30 12:35:08.186998 containerd[1740]: time="2025-04-30T12:35:08.186854718Z" level=info msg="StartContainer for \"4b6a71a60683ffcc988b15324ee726984f269737da16430f221d6b51fc522819\" returns successfully" Apr 30 12:35:08.196321 containerd[1740]: time="2025-04-30T12:35:08.196178044Z" level=info msg="StartContainer for \"b42ff6b3d51e5b2e8560a4a0497c2498308cf50c49cb5fc7b95a3aab09f246c4\" returns successfully" Apr 30 12:35:08.878862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1849030397.mount: Deactivated successfully. 
Apr 30 12:35:08.889698 kubelet[2994]: I0430 12:35:08.889648 2994 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:10.514266 kubelet[2994]: E0430 12:35:10.514224 2994 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.1-a-9a970e7770\" not found" node="ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:10.590422 kubelet[2994]: I0430 12:35:10.590380 2994 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:10.761035 kubelet[2994]: I0430 12:35:10.760722 2994 apiserver.go:52] "Watching apiserver"
Apr 30 12:35:10.777544 kubelet[2994]: I0430 12:35:10.777396 2994 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 12:35:12.612716 systemd[1]: Reload requested from client PID 3268 ('systemctl') (unit session-9.scope)...
Apr 30 12:35:12.612732 systemd[1]: Reloading...
Apr 30 12:35:12.718482 zram_generator::config[3318]: No configuration found.
Apr 30 12:35:12.833128 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:35:12.950203 systemd[1]: Reloading finished in 337 ms.
Apr 30 12:35:12.974436 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:35:12.987811 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 12:35:12.988033 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:35:12.988086 systemd[1]: kubelet.service: Consumed 1.470s CPU time, 113.8M memory peak.
Apr 30 12:35:12.993739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:35:13.161811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:35:13.175095 (kubelet)[3379]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 12:35:13.467609 kubelet[3379]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 12:35:13.467609 kubelet[3379]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 12:35:13.467609 kubelet[3379]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 12:35:13.467609 kubelet[3379]: I0430 12:35:13.225566 3379 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 12:35:13.467609 kubelet[3379]: I0430 12:35:13.230279 3379 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 12:35:13.467609 kubelet[3379]: I0430 12:35:13.230300 3379 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 12:35:13.467609 kubelet[3379]: I0430 12:35:13.230519 3379 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 12:35:13.469105 kubelet[3379]: I0430 12:35:13.469079 3379 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 30 12:35:13.471057 kubelet[3379]: I0430 12:35:13.470462 3379 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 12:35:13.479231 kubelet[3379]: I0430 12:35:13.479193 3379 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 12:35:13.479458 kubelet[3379]: I0430 12:35:13.479401 3379 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 12:35:13.479649 kubelet[3379]: I0430 12:35:13.479458 3379 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-a-9a970e7770","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 12:35:13.479738 kubelet[3379]: I0430 12:35:13.479653 3379 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 12:35:13.479738 kubelet[3379]: I0430 12:35:13.479662 3379 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 12:35:13.479738 kubelet[3379]: I0430 12:35:13.479705 3379 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 12:35:13.479833 kubelet[3379]: I0430 12:35:13.479819 3379 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 12:35:13.479863 kubelet[3379]: I0430 12:35:13.479834 3379 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 12:35:13.479863 kubelet[3379]: I0430 12:35:13.479862 3379 kubelet.go:312] "Adding apiserver pod source"
Apr 30 12:35:13.481107 kubelet[3379]: I0430 12:35:13.479878 3379 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 12:35:13.481359 kubelet[3379]: I0430 12:35:13.481330 3379 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Apr 30 12:35:13.481553 kubelet[3379]: I0430 12:35:13.481536 3379 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 12:35:13.481952 kubelet[3379]: I0430 12:35:13.481924 3379 server.go:1264] "Started kubelet"
Apr 30 12:35:13.483863 kubelet[3379]: I0430 12:35:13.483837 3379 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 12:35:13.488478 kubelet[3379]: I0430 12:35:13.487959 3379 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 12:35:13.489016 kubelet[3379]: I0430 12:35:13.488981 3379 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 12:35:13.489867 kubelet[3379]: I0430 12:35:13.489807 3379 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 12:35:13.490032 kubelet[3379]: I0430 12:35:13.490004 3379 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 12:35:13.492218 kubelet[3379]: I0430 12:35:13.492194 3379 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 12:35:13.494168 kubelet[3379]: I0430 12:35:13.494144 3379 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 12:35:13.494309 kubelet[3379]: I0430 12:35:13.494292 3379 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 12:35:13.495874 kubelet[3379]: I0430 12:35:13.495804 3379 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 12:35:13.497594 kubelet[3379]: I0430 12:35:13.497559 3379 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 12:35:13.497669 kubelet[3379]: I0430 12:35:13.497607 3379 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 30 12:35:13.497669 kubelet[3379]: I0430 12:35:13.497623 3379 kubelet.go:2337] "Starting kubelet main sync loop"
Apr 30 12:35:13.497712 kubelet[3379]: E0430 12:35:13.497666 3379 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 12:35:13.502461 kubelet[3379]: I0430 12:35:13.500270 3379 factory.go:221] Registration of the systemd container factory successfully
Apr 30 12:35:13.502461 kubelet[3379]: I0430 12:35:13.500378 3379 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 12:35:13.529815 kubelet[3379]: I0430 12:35:13.528402 3379 factory.go:221] Registration of the containerd container factory successfully
Apr 30 12:35:13.574446 kubelet[3379]: I0430 12:35:13.574407 3379 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 12:35:13.574865 kubelet[3379]: I0430 12:35:13.574836 3379 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 12:35:13.575007 kubelet[3379]: I0430 12:35:13.574977 3379 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 12:35:13.575395 kubelet[3379]: I0430 12:35:13.575369 3379 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 30 12:35:13.575542 kubelet[3379]: I0430 12:35:13.575499 3379 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 30 12:35:13.575605 kubelet[3379]: I0430 12:35:13.575597 3379 policy_none.go:49] "None policy: Start"
Apr 30 12:35:13.576838 kubelet[3379]: I0430 12:35:13.576818 3379 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 12:35:13.576921 kubelet[3379]: I0430 12:35:13.576844 3379 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 12:35:13.577029 kubelet[3379]: I0430 12:35:13.577009 3379 state_mem.go:75] "Updated machine memory state"
Apr 30 12:35:13.581278 kubelet[3379]: I0430 12:35:13.581247 3379 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 12:35:13.581547 kubelet[3379]: I0430 12:35:13.581444 3379 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 12:35:13.581547 kubelet[3379]: I0430 12:35:13.581541 3379 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 12:35:13.596956 kubelet[3379]: I0430 12:35:13.596752 3379 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:13.598318 kubelet[3379]: I0430 12:35:13.598205 3379 topology_manager.go:215] "Topology Admit Handler" podUID="44001eb3d1ee727455ff9aebc450a56f" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:13.598660 kubelet[3379]: I0430 12:35:13.598548 3379 topology_manager.go:215] "Topology Admit Handler" podUID="f39dda14a5bacb5c2f45a158fe99569f" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:13.599097 kubelet[3379]: I0430 12:35:13.598878 3379 topology_manager.go:215] "Topology Admit Handler" podUID="19a9c8c5a40532303383093972e728be" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:13.615653 kubelet[3379]: I0430 12:35:13.615594 3379 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:13.615791 kubelet[3379]: I0430 12:35:13.615707 3379 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:13.617266 kubelet[3379]: W0430 12:35:13.617117 3379 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 12:35:13.622488 kubelet[3379]: W0430 12:35:13.622275 3379 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 12:35:13.623488 kubelet[3379]: W0430 12:35:13.623354 3379 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 12:35:13.633049 sudo[3411]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 30 12:35:13.633819 sudo[3411]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 30 12:35:13.695789 kubelet[3379]: I0430 12:35:13.695366 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19a9c8c5a40532303383093972e728be-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-a-9a970e7770\" (UID: \"19a9c8c5a40532303383093972e728be\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:13.695789 kubelet[3379]: I0430 12:35:13.695416 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44001eb3d1ee727455ff9aebc450a56f-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-a-9a970e7770\" (UID: \"44001eb3d1ee727455ff9aebc450a56f\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:13.695789 kubelet[3379]: I0430 12:35:13.695540 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/44001eb3d1ee727455ff9aebc450a56f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-a-9a970e7770\" (UID: \"44001eb3d1ee727455ff9aebc450a56f\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:13.695789 kubelet[3379]: I0430 12:35:13.695559 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44001eb3d1ee727455ff9aebc450a56f-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-9a970e7770\" (UID: \"44001eb3d1ee727455ff9aebc450a56f\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:13.695789 kubelet[3379]: I0430 12:35:13.695575 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f39dda14a5bacb5c2f45a158fe99569f-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-a-9a970e7770\" (UID: \"f39dda14a5bacb5c2f45a158fe99569f\") " pod="kube-system/kube-scheduler-ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:13.696032 kubelet[3379]: I0430 12:35:13.695589 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19a9c8c5a40532303383093972e728be-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-a-9a970e7770\" (UID: \"19a9c8c5a40532303383093972e728be\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:13.696032 kubelet[3379]: I0430 12:35:13.695604 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19a9c8c5a40532303383093972e728be-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-a-9a970e7770\" (UID: \"19a9c8c5a40532303383093972e728be\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:13.696032 kubelet[3379]: I0430 12:35:13.695617 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44001eb3d1ee727455ff9aebc450a56f-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-9a970e7770\" (UID: \"44001eb3d1ee727455ff9aebc450a56f\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:13.696032 kubelet[3379]: I0430 12:35:13.695635 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/44001eb3d1ee727455ff9aebc450a56f-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-a-9a970e7770\" (UID: \"44001eb3d1ee727455ff9aebc450a56f\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:14.079881 sudo[3411]: pam_unix(sudo:session): session closed for user root
Apr 30 12:35:14.480273 kubelet[3379]: I0430 12:35:14.480232 3379 apiserver.go:52] "Watching apiserver"
Apr 30 12:35:14.494276 kubelet[3379]: I0430 12:35:14.494241 3379 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 12:35:14.572336 kubelet[3379]: W0430 12:35:14.571784 3379 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 12:35:14.572336 kubelet[3379]: E0430 12:35:14.571855 3379 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-a-9a970e7770\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-a-9a970e7770"
Apr 30 12:35:14.603934 kubelet[3379]: I0430 12:35:14.601987 3379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-a-9a970e7770" podStartSLOduration=1.601968847 podStartE2EDuration="1.601968847s" podCreationTimestamp="2025-04-30 12:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:35:14.586178878 +0000 UTC m=+1.407046417" watchObservedRunningTime="2025-04-30 12:35:14.601968847 +0000 UTC m=+1.422836426"
Apr 30 12:35:14.615368 kubelet[3379]: I0430 12:35:14.615240 3379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-a-9a970e7770" podStartSLOduration=1.6152066550000002 podStartE2EDuration="1.615206655s" podCreationTimestamp="2025-04-30 12:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:35:14.602578487 +0000 UTC m=+1.423446106" watchObservedRunningTime="2025-04-30 12:35:14.615206655 +0000 UTC m=+1.436074194"
Apr 30 12:35:14.637871 kubelet[3379]: I0430 12:35:14.637500 3379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-9a970e7770" podStartSLOduration=1.637480388 podStartE2EDuration="1.637480388s" podCreationTimestamp="2025-04-30 12:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:35:14.616097255 +0000 UTC m=+1.436964834" watchObservedRunningTime="2025-04-30 12:35:14.637480388 +0000 UTC m=+1.458347967"
Apr 30 12:35:16.006076 sudo[2308]: pam_unix(sudo:session): session closed for user root
Apr 30 12:35:16.076275 sshd[2307]: Connection closed by 10.200.16.10 port 58074
Apr 30 12:35:16.076865 sshd-session[2305]: pam_unix(sshd:session): session closed for user core
Apr 30 12:35:16.080015 systemd[1]: sshd@6-10.200.20.24:22-10.200.16.10:58074.service: Deactivated successfully.
Apr 30 12:35:16.084113 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 12:35:16.084366 systemd[1]: session-9.scope: Consumed 6.720s CPU time, 291.8M memory peak.
Apr 30 12:35:16.086858 systemd-logind[1709]: Session 9 logged out. Waiting for processes to exit.
Apr 30 12:35:16.088321 systemd-logind[1709]: Removed session 9.
Apr 30 12:35:27.630344 kubelet[3379]: I0430 12:35:27.630266 3379 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 30 12:35:27.631211 containerd[1740]: time="2025-04-30T12:35:27.631172724Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 30 12:35:27.631783 kubelet[3379]: I0430 12:35:27.631343 3379 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 30 12:35:28.186189 kubelet[3379]: I0430 12:35:28.186141 3379 topology_manager.go:215] "Topology Admit Handler" podUID="f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb" podNamespace="kube-system" podName="kube-proxy-6f2kf"
Apr 30 12:35:28.200361 systemd[1]: Created slice kubepods-besteffort-podf3b5a485_dbc6_41e8_a5a2_a3f4fbb3d7cb.slice - libcontainer container kubepods-besteffort-podf3b5a485_dbc6_41e8_a5a2_a3f4fbb3d7cb.slice.
Apr 30 12:35:28.203561 kubelet[3379]: I0430 12:35:28.203505 3379 topology_manager.go:215] "Topology Admit Handler" podUID="efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" podNamespace="kube-system" podName="cilium-kcjch"
Apr 30 12:35:28.219703 systemd[1]: Created slice kubepods-burstable-podefaa5877_6f1c_4369_bf3a_9c61e0e90fe7.slice - libcontainer container kubepods-burstable-podefaa5877_6f1c_4369_bf3a_9c61e0e90fe7.slice.
Apr 30 12:35:28.283289 kubelet[3379]: I0430 12:35:28.283205 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-lib-modules\") pod \"cilium-kcjch\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") " pod="kube-system/cilium-kcjch"
Apr 30 12:35:28.283289 kubelet[3379]: I0430 12:35:28.283253 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-hubble-tls\") pod \"cilium-kcjch\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") " pod="kube-system/cilium-kcjch"
Apr 30 12:35:28.283289 kubelet[3379]: I0430 12:35:28.283291 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cni-path\") pod \"cilium-kcjch\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") " pod="kube-system/cilium-kcjch"
Apr 30 12:35:28.283591 kubelet[3379]: I0430 12:35:28.283311 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cilium-config-path\") pod \"cilium-kcjch\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") " pod="kube-system/cilium-kcjch"
Apr 30 12:35:28.283591 kubelet[3379]: I0430 12:35:28.283327 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-host-proc-sys-net\") pod \"cilium-kcjch\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") " pod="kube-system/cilium-kcjch"
Apr 30 12:35:28.283591 kubelet[3379]: I0430 12:35:28.283342 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb-kube-proxy\") pod \"kube-proxy-6f2kf\" (UID: \"f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb\") " pod="kube-system/kube-proxy-6f2kf"
Apr 30 12:35:28.283591 kubelet[3379]: I0430 12:35:28.283357 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb-lib-modules\") pod \"kube-proxy-6f2kf\" (UID: \"f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb\") " pod="kube-system/kube-proxy-6f2kf"
Apr 30 12:35:28.283591 kubelet[3379]: I0430 12:35:28.283376 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cilium-run\") pod \"cilium-kcjch\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") " pod="kube-system/cilium-kcjch"
Apr 30 12:35:28.283591 kubelet[3379]: I0430 12:35:28.283402 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb-xtables-lock\") pod \"kube-proxy-6f2kf\" (UID: \"f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb\") " pod="kube-system/kube-proxy-6f2kf"
Apr 30 12:35:28.284012 kubelet[3379]: I0430 12:35:28.283756 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-bpf-maps\") pod \"cilium-kcjch\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") " pod="kube-system/cilium-kcjch"
Apr 30 12:35:28.284012 kubelet[3379]: I0430 12:35:28.283787 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-hostproc\") pod \"cilium-kcjch\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") " pod="kube-system/cilium-kcjch"
Apr 30 12:35:28.284012 kubelet[3379]: I0430 12:35:28.283835 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-etc-cni-netd\") pod \"cilium-kcjch\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") " pod="kube-system/cilium-kcjch"
Apr 30 12:35:28.284012 kubelet[3379]: I0430 12:35:28.283851 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-xtables-lock\") pod \"cilium-kcjch\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") " pod="kube-system/cilium-kcjch"
Apr 30 12:35:28.284012 kubelet[3379]: I0430 12:35:28.283866 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-host-proc-sys-kernel\") pod \"cilium-kcjch\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") " pod="kube-system/cilium-kcjch"
Apr 30 12:35:28.284012 kubelet[3379]: I0430 12:35:28.283896 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6598\" (UniqueName: \"kubernetes.io/projected/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-kube-api-access-h6598\") pod \"cilium-kcjch\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") " pod="kube-system/cilium-kcjch"
Apr 30 12:35:28.284176 kubelet[3379]: I0430 12:35:28.283917 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8hsx\" (UniqueName: \"kubernetes.io/projected/f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb-kube-api-access-z8hsx\") pod \"kube-proxy-6f2kf\" (UID: \"f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb\") " pod="kube-system/kube-proxy-6f2kf"
Apr 30 12:35:28.284176 kubelet[3379]: I0430 12:35:28.283934 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-clustermesh-secrets\") pod \"cilium-kcjch\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") " pod="kube-system/cilium-kcjch"
Apr 30 12:35:28.284176 kubelet[3379]: I0430 12:35:28.283949 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cilium-cgroup\") pod \"cilium-kcjch\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") " pod="kube-system/cilium-kcjch"
Apr 30 12:35:28.405996 kubelet[3379]: E0430 12:35:28.405951 3379 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 30 12:35:28.405996 kubelet[3379]: E0430 12:35:28.405990 3379 projected.go:200] Error preparing data for projected volume kube-api-access-z8hsx for pod kube-system/kube-proxy-6f2kf: configmap "kube-root-ca.crt" not found
Apr 30 12:35:28.406149 kubelet[3379]: E0430 12:35:28.406057 3379 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb-kube-api-access-z8hsx podName:f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb nodeName:}" failed. No retries permitted until 2025-04-30 12:35:28.906032439 +0000 UTC m=+15.726900018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-z8hsx" (UniqueName: "kubernetes.io/projected/f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb-kube-api-access-z8hsx") pod "kube-proxy-6f2kf" (UID: "f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb") : configmap "kube-root-ca.crt" not found
Apr 30 12:35:28.407707 kubelet[3379]: E0430 12:35:28.407666 3379 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 30 12:35:28.407707 kubelet[3379]: E0430 12:35:28.407696 3379 projected.go:200] Error preparing data for projected volume kube-api-access-h6598 for pod kube-system/cilium-kcjch: configmap "kube-root-ca.crt" not found
Apr 30 12:35:28.407875 kubelet[3379]: E0430 12:35:28.407829 3379 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-kube-api-access-h6598 podName:efaa5877-6f1c-4369-bf3a-9c61e0e90fe7 nodeName:}" failed. No retries permitted until 2025-04-30 12:35:28.9077596 +0000 UTC m=+15.728627139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h6598" (UniqueName: "kubernetes.io/projected/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-kube-api-access-h6598") pod "cilium-kcjch" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7") : configmap "kube-root-ca.crt" not found
Apr 30 12:35:28.746836 kubelet[3379]: I0430 12:35:28.745921 3379 topology_manager.go:215] "Topology Admit Handler" podUID="2ba318d2-bf69-4a5d-ab60-b49dad24502f" podNamespace="kube-system" podName="cilium-operator-599987898-z82qr"
Apr 30 12:35:28.753674 systemd[1]: Created slice kubepods-besteffort-pod2ba318d2_bf69_4a5d_ab60_b49dad24502f.slice - libcontainer container kubepods-besteffort-pod2ba318d2_bf69_4a5d_ab60_b49dad24502f.slice.
Apr 30 12:35:28.786924 kubelet[3379]: I0430 12:35:28.786758 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9qbd\" (UniqueName: \"kubernetes.io/projected/2ba318d2-bf69-4a5d-ab60-b49dad24502f-kube-api-access-h9qbd\") pod \"cilium-operator-599987898-z82qr\" (UID: \"2ba318d2-bf69-4a5d-ab60-b49dad24502f\") " pod="kube-system/cilium-operator-599987898-z82qr"
Apr 30 12:35:28.786924 kubelet[3379]: I0430 12:35:28.786875 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ba318d2-bf69-4a5d-ab60-b49dad24502f-cilium-config-path\") pod \"cilium-operator-599987898-z82qr\" (UID: \"2ba318d2-bf69-4a5d-ab60-b49dad24502f\") " pod="kube-system/cilium-operator-599987898-z82qr"
Apr 30 12:35:29.060008 containerd[1740]: time="2025-04-30T12:35:29.059895513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-z82qr,Uid:2ba318d2-bf69-4a5d-ab60-b49dad24502f,Namespace:kube-system,Attempt:0,}"
Apr 30 12:35:29.111251 containerd[1740]: time="2025-04-30T12:35:29.110944467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 12:35:29.111251 containerd[1740]: time="2025-04-30T12:35:29.111003987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 12:35:29.111251 containerd[1740]: time="2025-04-30T12:35:29.111015187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:35:29.111251 containerd[1740]: time="2025-04-30T12:35:29.111091267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:35:29.115515 containerd[1740]: time="2025-04-30T12:35:29.115356390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6f2kf,Uid:f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb,Namespace:kube-system,Attempt:0,}"
Apr 30 12:35:29.124531 containerd[1740]: time="2025-04-30T12:35:29.123693476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kcjch,Uid:efaa5877-6f1c-4369-bf3a-9c61e0e90fe7,Namespace:kube-system,Attempt:0,}"
Apr 30 12:35:29.128637 systemd[1]: Started cri-containerd-140f9b866e62ac29ac385801f749e3efc4a6dd5de291e0d16dec7afe92cf78b0.scope - libcontainer container 140f9b866e62ac29ac385801f749e3efc4a6dd5de291e0d16dec7afe92cf78b0.
Apr 30 12:35:29.160340 containerd[1740]: time="2025-04-30T12:35:29.160146420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-z82qr,Uid:2ba318d2-bf69-4a5d-ab60-b49dad24502f,Namespace:kube-system,Attempt:0,} returns sandbox id \"140f9b866e62ac29ac385801f749e3efc4a6dd5de291e0d16dec7afe92cf78b0\""
Apr 30 12:35:29.172482 containerd[1740]: time="2025-04-30T12:35:29.168140145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 12:35:29.172482 containerd[1740]: time="2025-04-30T12:35:29.168195385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 12:35:29.172482 containerd[1740]: time="2025-04-30T12:35:29.168210665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:35:29.172482 containerd[1740]: time="2025-04-30T12:35:29.168287825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:35:29.173756 containerd[1740]: time="2025-04-30T12:35:29.173700229Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 30 12:35:29.196928 systemd[1]: Started cri-containerd-4c5e74d14b7bbffcf3d5cee8ac2698090bc9a4832d51be443d2e88c0cf658182.scope - libcontainer container 4c5e74d14b7bbffcf3d5cee8ac2698090bc9a4832d51be443d2e88c0cf658182.
Apr 30 12:35:29.203129 containerd[1740]: time="2025-04-30T12:35:29.202547088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 12:35:29.203129 containerd[1740]: time="2025-04-30T12:35:29.202748168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 12:35:29.203129 containerd[1740]: time="2025-04-30T12:35:29.202765848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:35:29.203129 containerd[1740]: time="2025-04-30T12:35:29.202867688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:35:29.219850 systemd[1]: Started cri-containerd-5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d.scope - libcontainer container 5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d.
Apr 30 12:35:29.229777 containerd[1740]: time="2025-04-30T12:35:29.229713306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6f2kf,Uid:f3b5a485-dbc6-41e8-a5a2-a3f4fbb3d7cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c5e74d14b7bbffcf3d5cee8ac2698090bc9a4832d51be443d2e88c0cf658182\""
Apr 30 12:35:29.233865 containerd[1740]: time="2025-04-30T12:35:29.233738269Z" level=info msg="CreateContainer within sandbox \"4c5e74d14b7bbffcf3d5cee8ac2698090bc9a4832d51be443d2e88c0cf658182\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 30 12:35:29.251041 containerd[1740]: time="2025-04-30T12:35:29.250839840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kcjch,Uid:efaa5877-6f1c-4369-bf3a-9c61e0e90fe7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d\""
Apr 30 12:35:29.291117 containerd[1740]: time="2025-04-30T12:35:29.291030547Z" level=info msg="CreateContainer within sandbox \"4c5e74d14b7bbffcf3d5cee8ac2698090bc9a4832d51be443d2e88c0cf658182\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e43fb48d0d682e6a8097cfc17e6d8687f1ac172bf2f465076b30bd5549388fa4\""
Apr 30 12:35:29.293249 containerd[1740]: time="2025-04-30T12:35:29.291842227Z" level=info msg="StartContainer for \"e43fb48d0d682e6a8097cfc17e6d8687f1ac172bf2f465076b30bd5549388fa4\""
Apr 30 12:35:29.317618 systemd[1]: Started cri-containerd-e43fb48d0d682e6a8097cfc17e6d8687f1ac172bf2f465076b30bd5549388fa4.scope - libcontainer container e43fb48d0d682e6a8097cfc17e6d8687f1ac172bf2f465076b30bd5549388fa4.
Apr 30 12:35:29.348728 containerd[1740]: time="2025-04-30T12:35:29.348640105Z" level=info msg="StartContainer for \"e43fb48d0d682e6a8097cfc17e6d8687f1ac172bf2f465076b30bd5549388fa4\" returns successfully"
Apr 30 12:35:30.557977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3121175525.mount: Deactivated successfully.
Apr 30 12:35:31.203050 containerd[1740]: time="2025-04-30T12:35:31.202994337Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:35:31.205558 containerd[1740]: time="2025-04-30T12:35:31.205344179Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Apr 30 12:35:31.209057 containerd[1740]: time="2025-04-30T12:35:31.209024461Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:35:31.210804 containerd[1740]: time="2025-04-30T12:35:31.210665182Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.036912593s"
Apr 30 12:35:31.210804 containerd[1740]: time="2025-04-30T12:35:31.210703342Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Apr 30 12:35:31.212207 containerd[1740]: time="2025-04-30T12:35:31.212172103Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 30 12:35:31.214498 containerd[1740]: time="2025-04-30T12:35:31.214460865Z" level=info msg="CreateContainer within sandbox
\"140f9b866e62ac29ac385801f749e3efc4a6dd5de291e0d16dec7afe92cf78b0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 12:35:31.254063 containerd[1740]: time="2025-04-30T12:35:31.253939891Z" level=info msg="CreateContainer within sandbox \"140f9b866e62ac29ac385801f749e3efc4a6dd5de291e0d16dec7afe92cf78b0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8\"" Apr 30 12:35:31.254973 containerd[1740]: time="2025-04-30T12:35:31.254770771Z" level=info msg="StartContainer for \"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8\"" Apr 30 12:35:31.278666 systemd[1]: Started cri-containerd-5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8.scope - libcontainer container 5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8. Apr 30 12:35:31.309234 containerd[1740]: time="2025-04-30T12:35:31.309183208Z" level=info msg="StartContainer for \"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8\" returns successfully" Apr 30 12:35:31.609943 kubelet[3379]: I0430 12:35:31.609748 3379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-z82qr" podStartSLOduration=1.568492931 podStartE2EDuration="3.609731767s" podCreationTimestamp="2025-04-30 12:35:28 +0000 UTC" firstStartedPulling="2025-04-30 12:35:29.170537267 +0000 UTC m=+15.991404806" lastFinishedPulling="2025-04-30 12:35:31.211776063 +0000 UTC m=+18.032643642" observedRunningTime="2025-04-30 12:35:31.608599686 +0000 UTC m=+18.429467265" watchObservedRunningTime="2025-04-30 12:35:31.609731767 +0000 UTC m=+18.430599346" Apr 30 12:35:31.611069 kubelet[3379]: I0430 12:35:31.610902 3379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6f2kf" podStartSLOduration=3.610888488 podStartE2EDuration="3.610888488s" podCreationTimestamp="2025-04-30 12:35:28 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:35:29.606319236 +0000 UTC m=+16.427186775" watchObservedRunningTime="2025-04-30 12:35:31.610888488 +0000 UTC m=+18.431756067" Apr 30 12:35:36.285662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1572783909.mount: Deactivated successfully. Apr 30 12:35:39.269958 containerd[1740]: time="2025-04-30T12:35:39.269895591Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:35:39.272931 containerd[1740]: time="2025-04-30T12:35:39.272735953Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 30 12:35:39.277346 containerd[1740]: time="2025-04-30T12:35:39.277286596Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:35:39.279045 containerd[1740]: time="2025-04-30T12:35:39.278917077Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.066705734s" Apr 30 12:35:39.279045 containerd[1740]: time="2025-04-30T12:35:39.278952037Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 30 12:35:39.281943 containerd[1740]: 
time="2025-04-30T12:35:39.281761718Z" level=info msg="CreateContainer within sandbox \"5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:35:39.314882 containerd[1740]: time="2025-04-30T12:35:39.314824858Z" level=info msg="CreateContainer within sandbox \"5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7\"" Apr 30 12:35:39.316365 containerd[1740]: time="2025-04-30T12:35:39.315408699Z" level=info msg="StartContainer for \"8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7\"" Apr 30 12:35:39.349638 systemd[1]: Started cri-containerd-8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7.scope - libcontainer container 8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7. Apr 30 12:35:39.383349 containerd[1740]: time="2025-04-30T12:35:39.383300619Z" level=info msg="StartContainer for \"8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7\" returns successfully" Apr 30 12:35:39.391385 systemd[1]: cri-containerd-8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7.scope: Deactivated successfully. Apr 30 12:35:40.300995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7-rootfs.mount: Deactivated successfully. 
Apr 30 12:35:41.092344 containerd[1740]: time="2025-04-30T12:35:41.092259533Z" level=info msg="shim disconnected" id=8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7 namespace=k8s.io Apr 30 12:35:41.092344 containerd[1740]: time="2025-04-30T12:35:41.092337973Z" level=warning msg="cleaning up after shim disconnected" id=8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7 namespace=k8s.io Apr 30 12:35:41.092344 containerd[1740]: time="2025-04-30T12:35:41.092346694Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:35:41.621018 containerd[1740]: time="2025-04-30T12:35:41.620835895Z" level=info msg="CreateContainer within sandbox \"5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 12:35:41.656112 containerd[1740]: time="2025-04-30T12:35:41.656020836Z" level=info msg="CreateContainer within sandbox \"5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03\"" Apr 30 12:35:41.657200 containerd[1740]: time="2025-04-30T12:35:41.657142517Z" level=info msg="StartContainer for \"146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03\"" Apr 30 12:35:41.689659 systemd[1]: Started cri-containerd-146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03.scope - libcontainer container 146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03. Apr 30 12:35:41.718451 containerd[1740]: time="2025-04-30T12:35:41.718189274Z" level=info msg="StartContainer for \"146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03\" returns successfully" Apr 30 12:35:41.726104 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 12:35:41.726333 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Apr 30 12:35:41.726883 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:35:41.732947 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:35:41.733171 systemd[1]: cri-containerd-146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03.scope: Deactivated successfully. Apr 30 12:35:41.752464 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:35:41.770354 containerd[1740]: time="2025-04-30T12:35:41.770281825Z" level=info msg="shim disconnected" id=146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03 namespace=k8s.io Apr 30 12:35:41.770354 containerd[1740]: time="2025-04-30T12:35:41.770342865Z" level=warning msg="cleaning up after shim disconnected" id=146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03 namespace=k8s.io Apr 30 12:35:41.770354 containerd[1740]: time="2025-04-30T12:35:41.770352145Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:35:42.624751 containerd[1740]: time="2025-04-30T12:35:42.624707024Z" level=info msg="CreateContainer within sandbox \"5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 12:35:42.647267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03-rootfs.mount: Deactivated successfully. 
Apr 30 12:35:42.667599 containerd[1740]: time="2025-04-30T12:35:42.667543931Z" level=info msg="CreateContainer within sandbox \"5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae\"" Apr 30 12:35:42.669743 containerd[1740]: time="2025-04-30T12:35:42.668517731Z" level=info msg="StartContainer for \"d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae\"" Apr 30 12:35:42.699730 systemd[1]: Started cri-containerd-d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae.scope - libcontainer container d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae. Apr 30 12:35:42.729219 systemd[1]: cri-containerd-d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae.scope: Deactivated successfully. Apr 30 12:35:42.731700 containerd[1740]: time="2025-04-30T12:35:42.731648969Z" level=info msg="StartContainer for \"d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae\" returns successfully" Apr 30 12:35:42.763845 containerd[1740]: time="2025-04-30T12:35:42.763700909Z" level=info msg="shim disconnected" id=d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae namespace=k8s.io Apr 30 12:35:42.763845 containerd[1740]: time="2025-04-30T12:35:42.763792429Z" level=warning msg="cleaning up after shim disconnected" id=d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae namespace=k8s.io Apr 30 12:35:42.763845 containerd[1740]: time="2025-04-30T12:35:42.763801269Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:35:43.629094 containerd[1740]: time="2025-04-30T12:35:43.628997555Z" level=info msg="CreateContainer within sandbox \"5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 12:35:43.647210 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae-rootfs.mount: Deactivated successfully. Apr 30 12:35:43.668642 containerd[1740]: time="2025-04-30T12:35:43.668594779Z" level=info msg="CreateContainer within sandbox \"5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42\"" Apr 30 12:35:43.670241 containerd[1740]: time="2025-04-30T12:35:43.670183460Z" level=info msg="StartContainer for \"a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42\"" Apr 30 12:35:43.703650 systemd[1]: Started cri-containerd-a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42.scope - libcontainer container a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42. Apr 30 12:35:43.727587 systemd[1]: cri-containerd-a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42.scope: Deactivated successfully. 
Apr 30 12:35:43.734570 containerd[1740]: time="2025-04-30T12:35:43.734442819Z" level=info msg="StartContainer for \"a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42\" returns successfully" Apr 30 12:35:43.763129 containerd[1740]: time="2025-04-30T12:35:43.763066676Z" level=info msg="shim disconnected" id=a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42 namespace=k8s.io Apr 30 12:35:43.763129 containerd[1740]: time="2025-04-30T12:35:43.763120756Z" level=warning msg="cleaning up after shim disconnected" id=a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42 namespace=k8s.io Apr 30 12:35:43.763129 containerd[1740]: time="2025-04-30T12:35:43.763130116Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:35:44.633971 containerd[1740]: time="2025-04-30T12:35:44.633286045Z" level=info msg="CreateContainer within sandbox \"5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 12:35:44.647087 systemd[1]: run-containerd-runc-k8s.io-a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42-runc.RSAfG3.mount: Deactivated successfully. Apr 30 12:35:44.647189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42-rootfs.mount: Deactivated successfully. 
Apr 30 12:35:44.679139 containerd[1740]: time="2025-04-30T12:35:44.679079993Z" level=info msg="CreateContainer within sandbox \"5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e\"" Apr 30 12:35:44.679785 containerd[1740]: time="2025-04-30T12:35:44.679606873Z" level=info msg="StartContainer for \"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e\"" Apr 30 12:35:44.711633 systemd[1]: Started cri-containerd-003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e.scope - libcontainer container 003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e. Apr 30 12:35:44.741362 containerd[1740]: time="2025-04-30T12:35:44.741298790Z" level=info msg="StartContainer for \"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e\" returns successfully" Apr 30 12:35:44.887827 kubelet[3379]: I0430 12:35:44.887162 3379 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 12:35:44.926624 kubelet[3379]: I0430 12:35:44.926568 3379 topology_manager.go:215] "Topology Admit Handler" podUID="4e8894c3-eab6-49e2-a984-5ba710c87432" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qw6pw" Apr 30 12:35:44.930918 kubelet[3379]: I0430 12:35:44.930385 3379 topology_manager.go:215] "Topology Admit Handler" podUID="5ad12c01-42fd-4408-bd7f-b956b768f5da" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wwrld" Apr 30 12:35:44.939638 systemd[1]: Created slice kubepods-burstable-pod4e8894c3_eab6_49e2_a984_5ba710c87432.slice - libcontainer container kubepods-burstable-pod4e8894c3_eab6_49e2_a984_5ba710c87432.slice. Apr 30 12:35:44.949841 systemd[1]: Created slice kubepods-burstable-pod5ad12c01_42fd_4408_bd7f_b956b768f5da.slice - libcontainer container kubepods-burstable-pod5ad12c01_42fd_4408_bd7f_b956b768f5da.slice. 
Apr 30 12:35:45.086874 kubelet[3379]: I0430 12:35:45.086806 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ad12c01-42fd-4408-bd7f-b956b768f5da-config-volume\") pod \"coredns-7db6d8ff4d-wwrld\" (UID: \"5ad12c01-42fd-4408-bd7f-b956b768f5da\") " pod="kube-system/coredns-7db6d8ff4d-wwrld" Apr 30 12:35:45.086874 kubelet[3379]: I0430 12:35:45.086873 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx28h\" (UniqueName: \"kubernetes.io/projected/5ad12c01-42fd-4408-bd7f-b956b768f5da-kube-api-access-bx28h\") pod \"coredns-7db6d8ff4d-wwrld\" (UID: \"5ad12c01-42fd-4408-bd7f-b956b768f5da\") " pod="kube-system/coredns-7db6d8ff4d-wwrld" Apr 30 12:35:45.087127 kubelet[3379]: I0430 12:35:45.086903 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e8894c3-eab6-49e2-a984-5ba710c87432-config-volume\") pod \"coredns-7db6d8ff4d-qw6pw\" (UID: \"4e8894c3-eab6-49e2-a984-5ba710c87432\") " pod="kube-system/coredns-7db6d8ff4d-qw6pw" Apr 30 12:35:45.087127 kubelet[3379]: I0430 12:35:45.086925 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqnfb\" (UniqueName: \"kubernetes.io/projected/4e8894c3-eab6-49e2-a984-5ba710c87432-kube-api-access-mqnfb\") pod \"coredns-7db6d8ff4d-qw6pw\" (UID: \"4e8894c3-eab6-49e2-a984-5ba710c87432\") " pod="kube-system/coredns-7db6d8ff4d-qw6pw" Apr 30 12:35:45.247002 containerd[1740]: time="2025-04-30T12:35:45.246956618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qw6pw,Uid:4e8894c3-eab6-49e2-a984-5ba710c87432,Namespace:kube-system,Attempt:0,}" Apr 30 12:35:45.255041 containerd[1740]: time="2025-04-30T12:35:45.254739182Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wwrld,Uid:5ad12c01-42fd-4408-bd7f-b956b768f5da,Namespace:kube-system,Attempt:0,}" Apr 30 12:35:45.661317 kubelet[3379]: I0430 12:35:45.660843 3379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kcjch" podStartSLOduration=7.633082153 podStartE2EDuration="17.660824429s" podCreationTimestamp="2025-04-30 12:35:28 +0000 UTC" firstStartedPulling="2025-04-30 12:35:29.252224041 +0000 UTC m=+16.073091620" lastFinishedPulling="2025-04-30 12:35:39.279966317 +0000 UTC m=+26.100833896" observedRunningTime="2025-04-30 12:35:45.658878308 +0000 UTC m=+32.479745887" watchObservedRunningTime="2025-04-30 12:35:45.660824429 +0000 UTC m=+32.481691968" Apr 30 12:35:47.073130 systemd-networkd[1556]: cilium_host: Link UP Apr 30 12:35:47.074664 systemd-networkd[1556]: cilium_net: Link UP Apr 30 12:35:47.076375 systemd-networkd[1556]: cilium_net: Gained carrier Apr 30 12:35:47.077207 systemd-networkd[1556]: cilium_host: Gained carrier Apr 30 12:35:47.077348 systemd-networkd[1556]: cilium_net: Gained IPv6LL Apr 30 12:35:47.078721 systemd-networkd[1556]: cilium_host: Gained IPv6LL Apr 30 12:35:47.255303 systemd-networkd[1556]: cilium_vxlan: Link UP Apr 30 12:35:47.255311 systemd-networkd[1556]: cilium_vxlan: Gained carrier Apr 30 12:35:47.578541 kernel: NET: Registered PF_ALG protocol family Apr 30 12:35:48.372369 systemd-networkd[1556]: lxc_health: Link UP Apr 30 12:35:48.384994 systemd-networkd[1556]: lxc_health: Gained carrier Apr 30 12:35:48.821122 systemd-networkd[1556]: lxcdf5840094ff2: Link UP Apr 30 12:35:48.831469 kernel: eth0: renamed from tmpaeb43 Apr 30 12:35:48.839605 systemd-networkd[1556]: lxcdf5840094ff2: Gained carrier Apr 30 12:35:48.869575 kernel: eth0: renamed from tmpae5b5 Apr 30 12:35:48.872813 systemd-networkd[1556]: lxc88f17639e1f9: Link UP Apr 30 12:35:48.873186 systemd-networkd[1556]: lxc88f17639e1f9: Gained carrier Apr 30 12:35:49.194629 systemd-networkd[1556]: cilium_vxlan: 
Gained IPv6LL Apr 30 12:35:50.026618 systemd-networkd[1556]: lxc88f17639e1f9: Gained IPv6LL Apr 30 12:35:50.154573 systemd-networkd[1556]: lxc_health: Gained IPv6LL Apr 30 12:35:50.410059 kubelet[3379]: I0430 12:35:50.408623 3379 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 12:35:50.795594 systemd-networkd[1556]: lxcdf5840094ff2: Gained IPv6LL Apr 30 12:35:52.944120 containerd[1740]: time="2025-04-30T12:35:52.942938870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:35:52.947559 containerd[1740]: time="2025-04-30T12:35:52.944522951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:35:52.947559 containerd[1740]: time="2025-04-30T12:35:52.944562111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:35:52.947559 containerd[1740]: time="2025-04-30T12:35:52.944661951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:35:52.977008 containerd[1740]: time="2025-04-30T12:35:52.975963010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:35:52.977008 containerd[1740]: time="2025-04-30T12:35:52.976093770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:35:52.977897 containerd[1740]: time="2025-04-30T12:35:52.977674371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:35:52.977897 containerd[1740]: time="2025-04-30T12:35:52.977810651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:35:53.000689 systemd[1]: Started cri-containerd-aeb43b0badfba689c3e5c678ff0e04ffdcc15af2442696ea0424ad4d31d72e71.scope - libcontainer container aeb43b0badfba689c3e5c678ff0e04ffdcc15af2442696ea0424ad4d31d72e71. Apr 30 12:35:53.008058 systemd[1]: Started cri-containerd-ae5b5e76e3aa4406521baa3629a3f93e50af95bccfe37bcd31dfe313de3c64d4.scope - libcontainer container ae5b5e76e3aa4406521baa3629a3f93e50af95bccfe37bcd31dfe313de3c64d4. Apr 30 12:35:53.070330 containerd[1740]: time="2025-04-30T12:35:53.070276224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qw6pw,Uid:4e8894c3-eab6-49e2-a984-5ba710c87432,Namespace:kube-system,Attempt:0,} returns sandbox id \"aeb43b0badfba689c3e5c678ff0e04ffdcc15af2442696ea0424ad4d31d72e71\"" Apr 30 12:35:53.078314 containerd[1740]: time="2025-04-30T12:35:53.078265189Z" level=info msg="CreateContainer within sandbox \"aeb43b0badfba689c3e5c678ff0e04ffdcc15af2442696ea0424ad4d31d72e71\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 12:35:53.081067 containerd[1740]: time="2025-04-30T12:35:53.080923550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wwrld,Uid:5ad12c01-42fd-4408-bd7f-b956b768f5da,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae5b5e76e3aa4406521baa3629a3f93e50af95bccfe37bcd31dfe313de3c64d4\"" Apr 30 12:35:53.087303 containerd[1740]: time="2025-04-30T12:35:53.086864713Z" level=info msg="CreateContainer within sandbox \"ae5b5e76e3aa4406521baa3629a3f93e50af95bccfe37bcd31dfe313de3c64d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 12:35:53.128385 containerd[1740]: time="2025-04-30T12:35:53.128327297Z" level=info msg="CreateContainer within sandbox \"aeb43b0badfba689c3e5c678ff0e04ffdcc15af2442696ea0424ad4d31d72e71\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a5baa1e0543742d02761582d256466ee33d7077f186b1fe42013e333dd74323\"" Apr 
30 12:35:53.130265 containerd[1740]: time="2025-04-30T12:35:53.129165098Z" level=info msg="StartContainer for \"4a5baa1e0543742d02761582d256466ee33d7077f186b1fe42013e333dd74323\"" Apr 30 12:35:53.141123 containerd[1740]: time="2025-04-30T12:35:53.141068425Z" level=info msg="CreateContainer within sandbox \"ae5b5e76e3aa4406521baa3629a3f93e50af95bccfe37bcd31dfe313de3c64d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79cbfde4815d70ddecde70376064959f52fe952d78b7be6021f4ac4afed199c4\"" Apr 30 12:35:53.142896 containerd[1740]: time="2025-04-30T12:35:53.142832466Z" level=info msg="StartContainer for \"79cbfde4815d70ddecde70376064959f52fe952d78b7be6021f4ac4afed199c4\"" Apr 30 12:35:53.163676 systemd[1]: Started cri-containerd-4a5baa1e0543742d02761582d256466ee33d7077f186b1fe42013e333dd74323.scope - libcontainer container 4a5baa1e0543742d02761582d256466ee33d7077f186b1fe42013e333dd74323. Apr 30 12:35:53.187895 systemd[1]: Started cri-containerd-79cbfde4815d70ddecde70376064959f52fe952d78b7be6021f4ac4afed199c4.scope - libcontainer container 79cbfde4815d70ddecde70376064959f52fe952d78b7be6021f4ac4afed199c4. 
Apr 30 12:35:53.227031 containerd[1740]: time="2025-04-30T12:35:53.226070074Z" level=info msg="StartContainer for \"4a5baa1e0543742d02761582d256466ee33d7077f186b1fe42013e333dd74323\" returns successfully" Apr 30 12:35:53.227031 containerd[1740]: time="2025-04-30T12:35:53.226186874Z" level=info msg="StartContainer for \"79cbfde4815d70ddecde70376064959f52fe952d78b7be6021f4ac4afed199c4\" returns successfully" Apr 30 12:35:53.677561 kubelet[3379]: I0430 12:35:53.676693 3379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wwrld" podStartSLOduration=25.675412213 podStartE2EDuration="25.675412213s" podCreationTimestamp="2025-04-30 12:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:35:53.673186331 +0000 UTC m=+40.494053950" watchObservedRunningTime="2025-04-30 12:35:53.675412213 +0000 UTC m=+40.496279792" Apr 30 12:35:53.714842 kubelet[3379]: I0430 12:35:53.714484 3379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qw6pw" podStartSLOduration=25.714460915 podStartE2EDuration="25.714460915s" podCreationTimestamp="2025-04-30 12:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:35:53.713550075 +0000 UTC m=+40.534417734" watchObservedRunningTime="2025-04-30 12:35:53.714460915 +0000 UTC m=+40.535328494" Apr 30 12:38:50.105780 systemd[1]: Started sshd@7-10.200.20.24:22-10.200.16.10:46562.service - OpenSSH per-connection server daemon (10.200.16.10:46562). 
Apr 30 12:38:50.589690 sshd[4776]: Accepted publickey for core from 10.200.16.10 port 46562 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:38:50.591217 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:38:50.597155 systemd-logind[1709]: New session 10 of user core.
Apr 30 12:38:50.603622 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 12:38:51.005892 sshd[4779]: Connection closed by 10.200.16.10 port 46562
Apr 30 12:38:51.005327 sshd-session[4776]: pam_unix(sshd:session): session closed for user core
Apr 30 12:38:51.009069 systemd[1]: sshd@7-10.200.20.24:22-10.200.16.10:46562.service: Deactivated successfully.
Apr 30 12:38:51.011226 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 12:38:51.012778 systemd-logind[1709]: Session 10 logged out. Waiting for processes to exit.
Apr 30 12:38:51.013988 systemd-logind[1709]: Removed session 10.
Apr 30 12:38:56.099699 systemd[1]: Started sshd@8-10.200.20.24:22-10.200.16.10:46564.service - OpenSSH per-connection server daemon (10.200.16.10:46564).
Apr 30 12:38:56.578018 sshd[4792]: Accepted publickey for core from 10.200.16.10 port 46564 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:38:56.579262 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:38:56.585554 systemd-logind[1709]: New session 11 of user core.
Apr 30 12:38:56.590592 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 12:38:56.982159 sshd[4797]: Connection closed by 10.200.16.10 port 46564
Apr 30 12:38:56.982069 sshd-session[4792]: pam_unix(sshd:session): session closed for user core
Apr 30 12:38:56.985532 systemd[1]: sshd@8-10.200.20.24:22-10.200.16.10:46564.service: Deactivated successfully.
Apr 30 12:38:56.988229 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 12:38:56.991403 systemd-logind[1709]: Session 11 logged out. Waiting for processes to exit.
Apr 30 12:38:56.992661 systemd-logind[1709]: Removed session 11.
Apr 30 12:39:02.070730 systemd[1]: Started sshd@9-10.200.20.24:22-10.200.16.10:59036.service - OpenSSH per-connection server daemon (10.200.16.10:59036).
Apr 30 12:39:02.519999 sshd[4812]: Accepted publickey for core from 10.200.16.10 port 59036 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:02.521343 sshd-session[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:02.525645 systemd-logind[1709]: New session 12 of user core.
Apr 30 12:39:02.531658 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 12:39:02.909191 sshd[4814]: Connection closed by 10.200.16.10 port 59036
Apr 30 12:39:02.909996 sshd-session[4812]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:02.913259 systemd[1]: sshd@9-10.200.20.24:22-10.200.16.10:59036.service: Deactivated successfully.
Apr 30 12:39:02.915673 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 12:39:02.917764 systemd-logind[1709]: Session 12 logged out. Waiting for processes to exit.
Apr 30 12:39:02.919328 systemd-logind[1709]: Removed session 12.
Apr 30 12:39:07.999756 systemd[1]: Started sshd@10-10.200.20.24:22-10.200.16.10:59040.service - OpenSSH per-connection server daemon (10.200.16.10:59040).
Apr 30 12:39:08.487454 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 59040 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:08.489009 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:08.493808 systemd-logind[1709]: New session 13 of user core.
Apr 30 12:39:08.505662 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 12:39:08.906110 sshd[4833]: Connection closed by 10.200.16.10 port 59040
Apr 30 12:39:08.906873 sshd-session[4831]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:08.910827 systemd[1]: sshd@10-10.200.20.24:22-10.200.16.10:59040.service: Deactivated successfully.
Apr 30 12:39:08.913166 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 12:39:08.914161 systemd-logind[1709]: Session 13 logged out. Waiting for processes to exit.
Apr 30 12:39:08.915157 systemd-logind[1709]: Removed session 13.
Apr 30 12:39:13.988619 systemd[1]: Started sshd@11-10.200.20.24:22-10.200.16.10:39286.service - OpenSSH per-connection server daemon (10.200.16.10:39286).
Apr 30 12:39:14.441695 sshd[4848]: Accepted publickey for core from 10.200.16.10 port 39286 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:14.443088 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:14.447884 systemd-logind[1709]: New session 14 of user core.
Apr 30 12:39:14.455634 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 12:39:14.830767 sshd[4850]: Connection closed by 10.200.16.10 port 39286
Apr 30 12:39:14.831767 sshd-session[4848]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:14.835381 systemd-logind[1709]: Session 14 logged out. Waiting for processes to exit.
Apr 30 12:39:14.836008 systemd[1]: sshd@11-10.200.20.24:22-10.200.16.10:39286.service: Deactivated successfully.
Apr 30 12:39:14.838915 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 12:39:14.840145 systemd-logind[1709]: Removed session 14.
Apr 30 12:39:14.916719 systemd[1]: Started sshd@12-10.200.20.24:22-10.200.16.10:39296.service - OpenSSH per-connection server daemon (10.200.16.10:39296).
Apr 30 12:39:15.372198 sshd[4863]: Accepted publickey for core from 10.200.16.10 port 39296 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:15.373724 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:15.380493 systemd-logind[1709]: New session 15 of user core.
Apr 30 12:39:15.385643 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 12:39:15.796154 sshd[4865]: Connection closed by 10.200.16.10 port 39296
Apr 30 12:39:15.797110 sshd-session[4863]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:15.802035 systemd[1]: sshd@12-10.200.20.24:22-10.200.16.10:39296.service: Deactivated successfully.
Apr 30 12:39:15.806930 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 12:39:15.808106 systemd-logind[1709]: Session 15 logged out. Waiting for processes to exit.
Apr 30 12:39:15.809248 systemd-logind[1709]: Removed session 15.
Apr 30 12:39:15.884224 systemd[1]: Started sshd@13-10.200.20.24:22-10.200.16.10:39304.service - OpenSSH per-connection server daemon (10.200.16.10:39304).
Apr 30 12:39:16.378001 sshd[4875]: Accepted publickey for core from 10.200.16.10 port 39304 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:16.379391 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:16.383728 systemd-logind[1709]: New session 16 of user core.
Apr 30 12:39:16.392636 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 12:39:16.803923 sshd[4877]: Connection closed by 10.200.16.10 port 39304
Apr 30 12:39:16.804470 sshd-session[4875]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:16.809379 systemd[1]: sshd@13-10.200.20.24:22-10.200.16.10:39304.service: Deactivated successfully.
Apr 30 12:39:16.811901 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 12:39:16.812866 systemd-logind[1709]: Session 16 logged out. Waiting for processes to exit.
Apr 30 12:39:16.814372 systemd-logind[1709]: Removed session 16.
Apr 30 12:39:21.890723 systemd[1]: Started sshd@14-10.200.20.24:22-10.200.16.10:33224.service - OpenSSH per-connection server daemon (10.200.16.10:33224).
Apr 30 12:39:22.349710 sshd[4889]: Accepted publickey for core from 10.200.16.10 port 33224 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:22.351136 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:22.356035 systemd-logind[1709]: New session 17 of user core.
Apr 30 12:39:22.363659 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 12:39:22.742724 sshd[4891]: Connection closed by 10.200.16.10 port 33224
Apr 30 12:39:22.743363 sshd-session[4889]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:22.746909 systemd-logind[1709]: Session 17 logged out. Waiting for processes to exit.
Apr 30 12:39:22.748098 systemd[1]: sshd@14-10.200.20.24:22-10.200.16.10:33224.service: Deactivated successfully.
Apr 30 12:39:22.751837 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 12:39:22.755213 systemd-logind[1709]: Removed session 17.
Apr 30 12:39:22.836327 systemd[1]: Started sshd@15-10.200.20.24:22-10.200.16.10:33236.service - OpenSSH per-connection server daemon (10.200.16.10:33236).
Apr 30 12:39:23.317753 sshd[4903]: Accepted publickey for core from 10.200.16.10 port 33236 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:23.319247 sshd-session[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:23.323773 systemd-logind[1709]: New session 18 of user core.
Apr 30 12:39:23.331658 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 12:39:23.784484 sshd[4905]: Connection closed by 10.200.16.10 port 33236
Apr 30 12:39:23.785069 sshd-session[4903]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:23.788682 systemd[1]: sshd@15-10.200.20.24:22-10.200.16.10:33236.service: Deactivated successfully.
Apr 30 12:39:23.791195 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 12:39:23.792142 systemd-logind[1709]: Session 18 logged out. Waiting for processes to exit.
Apr 30 12:39:23.793026 systemd-logind[1709]: Removed session 18.
Apr 30 12:39:23.881753 systemd[1]: Started sshd@16-10.200.20.24:22-10.200.16.10:33242.service - OpenSSH per-connection server daemon (10.200.16.10:33242).
Apr 30 12:39:24.361743 sshd[4915]: Accepted publickey for core from 10.200.16.10 port 33242 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:24.363269 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:24.367770 systemd-logind[1709]: New session 19 of user core.
Apr 30 12:39:24.374884 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 12:39:26.173112 sshd[4917]: Connection closed by 10.200.16.10 port 33242
Apr 30 12:39:26.173879 sshd-session[4915]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:26.178113 systemd[1]: sshd@16-10.200.20.24:22-10.200.16.10:33242.service: Deactivated successfully.
Apr 30 12:39:26.180902 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 12:39:26.181216 systemd[1]: session-19.scope: Consumed 463ms CPU time, 64.2M memory peak.
Apr 30 12:39:26.182109 systemd-logind[1709]: Session 19 logged out. Waiting for processes to exit.
Apr 30 12:39:26.183365 systemd-logind[1709]: Removed session 19.
Apr 30 12:39:26.265640 systemd[1]: Started sshd@17-10.200.20.24:22-10.200.16.10:33248.service - OpenSSH per-connection server daemon (10.200.16.10:33248).
Apr 30 12:39:26.753850 sshd[4934]: Accepted publickey for core from 10.200.16.10 port 33248 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:26.756040 sshd-session[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:26.760884 systemd-logind[1709]: New session 20 of user core.
Apr 30 12:39:26.766617 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 12:39:27.279506 sshd[4936]: Connection closed by 10.200.16.10 port 33248
Apr 30 12:39:27.279906 sshd-session[4934]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:27.283834 systemd[1]: sshd@17-10.200.20.24:22-10.200.16.10:33248.service: Deactivated successfully.
Apr 30 12:39:27.286010 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 12:39:27.287591 systemd-logind[1709]: Session 20 logged out. Waiting for processes to exit.
Apr 30 12:39:27.288912 systemd-logind[1709]: Removed session 20.
Apr 30 12:39:27.373854 systemd[1]: Started sshd@18-10.200.20.24:22-10.200.16.10:33252.service - OpenSSH per-connection server daemon (10.200.16.10:33252).
Apr 30 12:39:27.827053 sshd[4946]: Accepted publickey for core from 10.200.16.10 port 33252 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:27.828596 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:27.833066 systemd-logind[1709]: New session 21 of user core.
Apr 30 12:39:27.840639 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 12:39:28.213482 sshd[4948]: Connection closed by 10.200.16.10 port 33252
Apr 30 12:39:28.214041 sshd-session[4946]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:28.216924 systemd[1]: sshd@18-10.200.20.24:22-10.200.16.10:33252.service: Deactivated successfully.
Apr 30 12:39:28.218848 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 12:39:28.220539 systemd-logind[1709]: Session 21 logged out. Waiting for processes to exit.
Apr 30 12:39:28.221830 systemd-logind[1709]: Removed session 21.
Apr 30 12:39:33.296357 systemd[1]: Started sshd@19-10.200.20.24:22-10.200.16.10:57114.service - OpenSSH per-connection server daemon (10.200.16.10:57114).
Apr 30 12:39:33.752984 sshd[4965]: Accepted publickey for core from 10.200.16.10 port 57114 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:33.754772 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:33.759916 systemd-logind[1709]: New session 22 of user core.
Apr 30 12:39:33.767622 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 12:39:34.150549 sshd[4967]: Connection closed by 10.200.16.10 port 57114
Apr 30 12:39:34.151238 sshd-session[4965]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:34.154828 systemd[1]: sshd@19-10.200.20.24:22-10.200.16.10:57114.service: Deactivated successfully.
Apr 30 12:39:34.157283 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 12:39:34.158220 systemd-logind[1709]: Session 22 logged out. Waiting for processes to exit.
Apr 30 12:39:34.159334 systemd-logind[1709]: Removed session 22.
Apr 30 12:39:39.241826 systemd[1]: Started sshd@20-10.200.20.24:22-10.200.16.10:57044.service - OpenSSH per-connection server daemon (10.200.16.10:57044).
Apr 30 12:39:39.727543 sshd[4978]: Accepted publickey for core from 10.200.16.10 port 57044 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:39.729181 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:39.733620 systemd-logind[1709]: New session 23 of user core.
Apr 30 12:39:39.739717 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 12:39:40.135270 sshd[4980]: Connection closed by 10.200.16.10 port 57044
Apr 30 12:39:40.136017 sshd-session[4978]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:40.139916 systemd[1]: sshd@20-10.200.20.24:22-10.200.16.10:57044.service: Deactivated successfully.
Apr 30 12:39:40.142188 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 12:39:40.143346 systemd-logind[1709]: Session 23 logged out. Waiting for processes to exit.
Apr 30 12:39:40.144294 systemd-logind[1709]: Removed session 23.
Apr 30 12:39:45.226832 systemd[1]: Started sshd@21-10.200.20.24:22-10.200.16.10:57046.service - OpenSSH per-connection server daemon (10.200.16.10:57046).
Apr 30 12:39:45.708344 sshd[4992]: Accepted publickey for core from 10.200.16.10 port 57046 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:45.709830 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:45.715130 systemd-logind[1709]: New session 24 of user core.
Apr 30 12:39:45.720635 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 12:39:46.112602 sshd[4994]: Connection closed by 10.200.16.10 port 57046
Apr 30 12:39:46.111940 sshd-session[4992]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:46.115269 systemd[1]: sshd@21-10.200.20.24:22-10.200.16.10:57046.service: Deactivated successfully.
Apr 30 12:39:46.117119 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 12:39:46.119310 systemd-logind[1709]: Session 24 logged out. Waiting for processes to exit.
Apr 30 12:39:46.120339 systemd-logind[1709]: Removed session 24.
Apr 30 12:39:46.201756 systemd[1]: Started sshd@22-10.200.20.24:22-10.200.16.10:57060.service - OpenSSH per-connection server daemon (10.200.16.10:57060).
Apr 30 12:39:46.681741 sshd[5006]: Accepted publickey for core from 10.200.16.10 port 57060 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:46.683208 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:46.687724 systemd-logind[1709]: New session 25 of user core.
Apr 30 12:39:46.695590 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 12:39:48.835165 containerd[1740]: time="2025-04-30T12:39:48.835098640Z" level=info msg="StopContainer for \"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8\" with timeout 30 (s)"
Apr 30 12:39:48.836531 containerd[1740]: time="2025-04-30T12:39:48.836226801Z" level=info msg="Stop container \"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8\" with signal terminated"
Apr 30 12:39:48.846822 containerd[1740]: time="2025-04-30T12:39:48.846621807Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 12:39:48.852676 systemd[1]: cri-containerd-5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8.scope: Deactivated successfully.
Apr 30 12:39:48.858590 containerd[1740]: time="2025-04-30T12:39:48.858409414Z" level=info msg="StopContainer for \"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e\" with timeout 2 (s)"
Apr 30 12:39:48.858890 containerd[1740]: time="2025-04-30T12:39:48.858731654Z" level=info msg="Stop container \"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e\" with signal terminated"
Apr 30 12:39:48.866535 systemd-networkd[1556]: lxc_health: Link DOWN
Apr 30 12:39:48.866545 systemd-networkd[1556]: lxc_health: Lost carrier
Apr 30 12:39:48.884950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8-rootfs.mount: Deactivated successfully.
Apr 30 12:39:48.888002 systemd[1]: cri-containerd-003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e.scope: Deactivated successfully.
Apr 30 12:39:48.889722 systemd[1]: cri-containerd-003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e.scope: Consumed 7.144s CPU time, 124.8M memory peak, 136K read from disk, 12.9M written to disk.
Apr 30 12:39:48.910292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e-rootfs.mount: Deactivated successfully.
Apr 30 12:39:48.937309 containerd[1740]: time="2025-04-30T12:39:48.937192182Z" level=info msg="shim disconnected" id=003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e namespace=k8s.io
Apr 30 12:39:48.937309 containerd[1740]: time="2025-04-30T12:39:48.937263742Z" level=warning msg="cleaning up after shim disconnected" id=003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e namespace=k8s.io
Apr 30 12:39:48.937309 containerd[1740]: time="2025-04-30T12:39:48.937273062Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:48.938122 containerd[1740]: time="2025-04-30T12:39:48.938081462Z" level=info msg="shim disconnected" id=5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8 namespace=k8s.io
Apr 30 12:39:48.938345 containerd[1740]: time="2025-04-30T12:39:48.938326942Z" level=warning msg="cleaning up after shim disconnected" id=5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8 namespace=k8s.io
Apr 30 12:39:48.938471 containerd[1740]: time="2025-04-30T12:39:48.938407182Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:48.962622 containerd[1740]: time="2025-04-30T12:39:48.962500837Z" level=info msg="StopContainer for \"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8\" returns successfully"
Apr 30 12:39:48.963404 containerd[1740]: time="2025-04-30T12:39:48.963371677Z" level=info msg="StopPodSandbox for \"140f9b866e62ac29ac385801f749e3efc4a6dd5de291e0d16dec7afe92cf78b0\""
Apr 30 12:39:48.963735 containerd[1740]: time="2025-04-30T12:39:48.963419677Z" level=info msg="Container to stop \"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:39:48.964185 containerd[1740]: time="2025-04-30T12:39:48.964102278Z" level=info msg="StopContainer for \"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e\" returns successfully"
Apr 30 12:39:48.965790 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-140f9b866e62ac29ac385801f749e3efc4a6dd5de291e0d16dec7afe92cf78b0-shm.mount: Deactivated successfully.
Apr 30 12:39:48.966202 containerd[1740]: time="2025-04-30T12:39:48.965977559Z" level=info msg="StopPodSandbox for \"5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d\""
Apr 30 12:39:48.966202 containerd[1740]: time="2025-04-30T12:39:48.966021719Z" level=info msg="Container to stop \"8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:39:48.967159 containerd[1740]: time="2025-04-30T12:39:48.966031839Z" level=info msg="Container to stop \"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:39:48.967159 containerd[1740]: time="2025-04-30T12:39:48.967103920Z" level=info msg="Container to stop \"d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:39:48.967159 containerd[1740]: time="2025-04-30T12:39:48.967121560Z" level=info msg="Container to stop \"a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:39:48.967159 containerd[1740]: time="2025-04-30T12:39:48.967132560Z" level=info msg="Container to stop \"146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:39:48.970113 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d-shm.mount: Deactivated successfully.
Apr 30 12:39:48.976522 systemd[1]: cri-containerd-140f9b866e62ac29ac385801f749e3efc4a6dd5de291e0d16dec7afe92cf78b0.scope: Deactivated successfully.
Apr 30 12:39:48.977300 systemd[1]: cri-containerd-5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d.scope: Deactivated successfully.
Apr 30 12:39:49.012233 containerd[1740]: time="2025-04-30T12:39:49.012151747Z" level=info msg="shim disconnected" id=5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d namespace=k8s.io
Apr 30 12:39:49.012233 containerd[1740]: time="2025-04-30T12:39:49.012207947Z" level=info msg="shim disconnected" id=140f9b866e62ac29ac385801f749e3efc4a6dd5de291e0d16dec7afe92cf78b0 namespace=k8s.io
Apr 30 12:39:49.012969 containerd[1740]: time="2025-04-30T12:39:49.012257067Z" level=warning msg="cleaning up after shim disconnected" id=140f9b866e62ac29ac385801f749e3efc4a6dd5de291e0d16dec7afe92cf78b0 namespace=k8s.io
Apr 30 12:39:49.012969 containerd[1740]: time="2025-04-30T12:39:49.012266427Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:49.013323 containerd[1740]: time="2025-04-30T12:39:49.012212507Z" level=warning msg="cleaning up after shim disconnected" id=5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d namespace=k8s.io
Apr 30 12:39:49.013415 containerd[1740]: time="2025-04-30T12:39:49.013400667Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:49.027593 containerd[1740]: time="2025-04-30T12:39:49.027535516Z" level=warning msg="cleanup warnings time=\"2025-04-30T12:39:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 12:39:49.028648 containerd[1740]: time="2025-04-30T12:39:49.028615996Z" level=info msg="TearDown network for sandbox \"140f9b866e62ac29ac385801f749e3efc4a6dd5de291e0d16dec7afe92cf78b0\" successfully"
Apr 30 12:39:49.028879 containerd[1740]: time="2025-04-30T12:39:49.028756637Z" level=info msg="StopPodSandbox for \"140f9b866e62ac29ac385801f749e3efc4a6dd5de291e0d16dec7afe92cf78b0\" returns successfully"
Apr 30 12:39:49.032037 containerd[1740]: time="2025-04-30T12:39:49.032001798Z" level=info msg="TearDown network for sandbox \"5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d\" successfully"
Apr 30 12:39:49.032282 containerd[1740]: time="2025-04-30T12:39:49.032179319Z" level=info msg="StopPodSandbox for \"5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d\" returns successfully"
Apr 30 12:39:49.049507 kubelet[3379]: I0430 12:39:49.049476 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-clustermesh-secrets\") pod \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") "
Apr 30 12:39:49.051530 kubelet[3379]: I0430 12:39:49.050526 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-host-proc-sys-kernel\") pod \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") "
Apr 30 12:39:49.051530 kubelet[3379]: I0430 12:39:49.050565 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6598\" (UniqueName: \"kubernetes.io/projected/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-kube-api-access-h6598\") pod \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") "
Apr 30 12:39:49.051530 kubelet[3379]: I0430 12:39:49.050607 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-lib-modules\") pod \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") "
Apr 30 12:39:49.051530 kubelet[3379]: I0430 12:39:49.050628 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cilium-config-path\") pod \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") "
Apr 30 12:39:49.051530 kubelet[3379]: I0430 12:39:49.050646 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-bpf-maps\") pod \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") "
Apr 30 12:39:49.051530 kubelet[3379]: I0430 12:39:49.050665 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-hubble-tls\") pod \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") "
Apr 30 12:39:49.051760 kubelet[3379]: I0430 12:39:49.050681 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cilium-cgroup\") pod \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") "
Apr 30 12:39:49.051760 kubelet[3379]: I0430 12:39:49.050699 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9qbd\" (UniqueName: \"kubernetes.io/projected/2ba318d2-bf69-4a5d-ab60-b49dad24502f-kube-api-access-h9qbd\") pod \"2ba318d2-bf69-4a5d-ab60-b49dad24502f\" (UID: \"2ba318d2-bf69-4a5d-ab60-b49dad24502f\") "
Apr 30 12:39:49.051760 kubelet[3379]: I0430 12:39:49.050715 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cni-path\") pod \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") "
Apr 30 12:39:49.051760 kubelet[3379]: I0430 12:39:49.050730 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ba318d2-bf69-4a5d-ab60-b49dad24502f-cilium-config-path\") pod \"2ba318d2-bf69-4a5d-ab60-b49dad24502f\" (UID: \"2ba318d2-bf69-4a5d-ab60-b49dad24502f\") "
Apr 30 12:39:49.051760 kubelet[3379]: I0430 12:39:49.050746 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cilium-run\") pod \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") "
Apr 30 12:39:49.051760 kubelet[3379]: I0430 12:39:49.050760 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-etc-cni-netd\") pod \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") "
Apr 30 12:39:49.051885 kubelet[3379]: I0430 12:39:49.050776 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-hostproc\") pod \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") "
Apr 30 12:39:49.051885 kubelet[3379]: I0430 12:39:49.050791 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-xtables-lock\") pod \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") "
Apr 30 12:39:49.051885 kubelet[3379]: I0430 12:39:49.050805 3379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-host-proc-sys-net\") pod \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\" (UID: \"efaa5877-6f1c-4369-bf3a-9c61e0e90fe7\") "
Apr 30 12:39:49.051885 kubelet[3379]: I0430 12:39:49.050880 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:49.054169 kubelet[3379]: I0430 12:39:49.053393 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:49.054169 kubelet[3379]: I0430 12:39:49.053627 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cni-path" (OuterVolumeSpecName: "cni-path") pod "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:49.057629 kubelet[3379]: I0430 12:39:49.056920 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:49.057629 kubelet[3379]: I0430 12:39:49.056975 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:49.057629 kubelet[3379]: I0430 12:39:49.056995 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-hostproc" (OuterVolumeSpecName: "hostproc") pod "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:49.057629 kubelet[3379]: I0430 12:39:49.057012 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:49.057629 kubelet[3379]: I0430 12:39:49.057039 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:49.057853 kubelet[3379]: I0430 12:39:49.057054 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:49.061559 kubelet[3379]: I0430 12:39:49.061167 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 12:39:49.061559 kubelet[3379]: I0430 12:39:49.061307 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 30 12:39:49.065818 kubelet[3379]: I0430 12:39:49.065766 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba318d2-bf69-4a5d-ab60-b49dad24502f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2ba318d2-bf69-4a5d-ab60-b49dad24502f" (UID: "2ba318d2-bf69-4a5d-ab60-b49dad24502f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 12:39:49.065950 kubelet[3379]: I0430 12:39:49.065889 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba318d2-bf69-4a5d-ab60-b49dad24502f-kube-api-access-h9qbd" (OuterVolumeSpecName: "kube-api-access-h9qbd") pod "2ba318d2-bf69-4a5d-ab60-b49dad24502f" (UID: "2ba318d2-bf69-4a5d-ab60-b49dad24502f"). InnerVolumeSpecName "kube-api-access-h9qbd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 12:39:49.065950 kubelet[3379]: I0430 12:39:49.065927 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:49.066411 kubelet[3379]: I0430 12:39:49.066395 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 12:39:49.066621 kubelet[3379]: I0430 12:39:49.066598 3379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-kube-api-access-h6598" (OuterVolumeSpecName: "kube-api-access-h6598") pod "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" (UID: "efaa5877-6f1c-4369-bf3a-9c61e0e90fe7"). InnerVolumeSpecName "kube-api-access-h6598". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 12:39:49.087412 kubelet[3379]: I0430 12:39:49.087310 3379 scope.go:117] "RemoveContainer" containerID="003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e"
Apr 30 12:39:49.093523 containerd[1740]: time="2025-04-30T12:39:49.092900875Z" level=info msg="RemoveContainer for \"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e\""
Apr 30 12:39:49.095477 systemd[1]: Removed slice kubepods-burstable-podefaa5877_6f1c_4369_bf3a_9c61e0e90fe7.slice - libcontainer container kubepods-burstable-podefaa5877_6f1c_4369_bf3a_9c61e0e90fe7.slice.
Apr 30 12:39:49.095642 systemd[1]: kubepods-burstable-podefaa5877_6f1c_4369_bf3a_9c61e0e90fe7.slice: Consumed 7.217s CPU time, 125.2M memory peak, 136K read from disk, 12.9M written to disk.
Apr 30 12:39:49.100630 systemd[1]: Removed slice kubepods-besteffort-pod2ba318d2_bf69_4a5d_ab60_b49dad24502f.slice - libcontainer container kubepods-besteffort-pod2ba318d2_bf69_4a5d_ab60_b49dad24502f.slice.
Apr 30 12:39:49.108124 containerd[1740]: time="2025-04-30T12:39:49.108064924Z" level=info msg="RemoveContainer for \"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e\" returns successfully" Apr 30 12:39:49.108546 kubelet[3379]: I0430 12:39:49.108524 3379 scope.go:117] "RemoveContainer" containerID="a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42" Apr 30 12:39:49.109970 containerd[1740]: time="2025-04-30T12:39:49.109853565Z" level=info msg="RemoveContainer for \"a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42\"" Apr 30 12:39:49.119682 containerd[1740]: time="2025-04-30T12:39:49.119637891Z" level=info msg="RemoveContainer for \"a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42\" returns successfully" Apr 30 12:39:49.119963 kubelet[3379]: I0430 12:39:49.119930 3379 scope.go:117] "RemoveContainer" containerID="d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae" Apr 30 12:39:49.123486 containerd[1740]: time="2025-04-30T12:39:49.123214053Z" level=info msg="RemoveContainer for \"d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae\"" Apr 30 12:39:49.134607 containerd[1740]: time="2025-04-30T12:39:49.134548060Z" level=info msg="RemoveContainer for \"d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae\" returns successfully" Apr 30 12:39:49.135121 kubelet[3379]: I0430 12:39:49.135080 3379 scope.go:117] "RemoveContainer" containerID="146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03" Apr 30 12:39:49.138837 containerd[1740]: time="2025-04-30T12:39:49.138115902Z" level=info msg="RemoveContainer for \"146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03\"" Apr 30 12:39:49.148129 containerd[1740]: time="2025-04-30T12:39:49.148080108Z" level=info msg="RemoveContainer for \"146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03\" returns successfully" Apr 30 12:39:49.148658 kubelet[3379]: I0430 12:39:49.148330 3379 scope.go:117] 
"RemoveContainer" containerID="8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7" Apr 30 12:39:49.149453 containerd[1740]: time="2025-04-30T12:39:49.149381429Z" level=info msg="RemoveContainer for \"8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7\"" Apr 30 12:39:49.150986 kubelet[3379]: I0430 12:39:49.150957 3379 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-bpf-maps\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 12:39:49.150986 kubelet[3379]: I0430 12:39:49.150979 3379 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cilium-cgroup\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 12:39:49.151119 kubelet[3379]: I0430 12:39:49.150989 3379 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h9qbd\" (UniqueName: \"kubernetes.io/projected/2ba318d2-bf69-4a5d-ab60-b49dad24502f-kube-api-access-h9qbd\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 12:39:49.151119 kubelet[3379]: I0430 12:39:49.151000 3379 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-hubble-tls\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 12:39:49.151119 kubelet[3379]: I0430 12:39:49.151010 3379 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cni-path\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 12:39:49.151119 kubelet[3379]: I0430 12:39:49.151018 3379 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ba318d2-bf69-4a5d-ab60-b49dad24502f-cilium-config-path\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 
12:39:49.151119 kubelet[3379]: I0430 12:39:49.151026 3379 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cilium-run\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 12:39:49.151119 kubelet[3379]: I0430 12:39:49.151034 3379 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-etc-cni-netd\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 12:39:49.151119 kubelet[3379]: I0430 12:39:49.151042 3379 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-hostproc\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 12:39:49.151119 kubelet[3379]: I0430 12:39:49.151050 3379 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-xtables-lock\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 12:39:49.151277 kubelet[3379]: I0430 12:39:49.151058 3379 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-host-proc-sys-net\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 12:39:49.151277 kubelet[3379]: I0430 12:39:49.151066 3379 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-clustermesh-secrets\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 12:39:49.151277 kubelet[3379]: I0430 12:39:49.151075 3379 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-host-proc-sys-kernel\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 
12:39:49.151277 kubelet[3379]: I0430 12:39:49.151083 3379 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h6598\" (UniqueName: \"kubernetes.io/projected/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-kube-api-access-h6598\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 12:39:49.151277 kubelet[3379]: I0430 12:39:49.151091 3379 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-cilium-config-path\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 12:39:49.151277 kubelet[3379]: I0430 12:39:49.151102 3379 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7-lib-modules\") on node \"ci-4230.1.1-a-9a970e7770\" DevicePath \"\"" Apr 30 12:39:49.158322 containerd[1740]: time="2025-04-30T12:39:49.158245914Z" level=info msg="RemoveContainer for \"8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7\" returns successfully" Apr 30 12:39:49.158557 kubelet[3379]: I0430 12:39:49.158527 3379 scope.go:117] "RemoveContainer" containerID="003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e" Apr 30 12:39:49.158918 containerd[1740]: time="2025-04-30T12:39:49.158811155Z" level=error msg="ContainerStatus for \"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e\": not found" Apr 30 12:39:49.159001 kubelet[3379]: E0430 12:39:49.158973 3379 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e\": not found" containerID="003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e" Apr 30 
12:39:49.159098 kubelet[3379]: I0430 12:39:49.159015 3379 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e"} err="failed to get container status \"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e\": rpc error: code = NotFound desc = an error occurred when try to find container \"003ad54bbbf2609ee7b6fffbc4d1437e9ccbc9e95274dcfb3da9e759cc37879e\": not found" Apr 30 12:39:49.159098 kubelet[3379]: I0430 12:39:49.159097 3379 scope.go:117] "RemoveContainer" containerID="a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42" Apr 30 12:39:49.159284 containerd[1740]: time="2025-04-30T12:39:49.159249515Z" level=error msg="ContainerStatus for \"a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42\": not found" Apr 30 12:39:49.159394 kubelet[3379]: E0430 12:39:49.159367 3379 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42\": not found" containerID="a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42" Apr 30 12:39:49.159466 kubelet[3379]: I0430 12:39:49.159397 3379 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42"} err="failed to get container status \"a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42\": rpc error: code = NotFound desc = an error occurred when try to find container \"a793969b76117c4f8de9cd2f500dcaa6fe25c529596c4a5cd42b424817f27a42\": not found" Apr 30 12:39:49.159466 kubelet[3379]: I0430 12:39:49.159412 3379 scope.go:117] 
"RemoveContainer" containerID="d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae" Apr 30 12:39:49.159726 containerd[1740]: time="2025-04-30T12:39:49.159697435Z" level=error msg="ContainerStatus for \"d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae\": not found" Apr 30 12:39:49.159959 kubelet[3379]: E0430 12:39:49.159840 3379 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae\": not found" containerID="d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae" Apr 30 12:39:49.159959 kubelet[3379]: I0430 12:39:49.159866 3379 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae"} err="failed to get container status \"d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"d47ce88a7a8502a55063b04f7dcf1f308f88cd600b97248f224c888bb5b8b8ae\": not found" Apr 30 12:39:49.159959 kubelet[3379]: I0430 12:39:49.159881 3379 scope.go:117] "RemoveContainer" containerID="146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03" Apr 30 12:39:49.160063 containerd[1740]: time="2025-04-30T12:39:49.160034875Z" level=error msg="ContainerStatus for \"146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03\": not found" Apr 30 12:39:49.160161 kubelet[3379]: E0430 12:39:49.160137 3379 remote_runtime.go:432] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03\": not found" containerID="146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03" Apr 30 12:39:49.160208 kubelet[3379]: I0430 12:39:49.160162 3379 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03"} err="failed to get container status \"146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03\": rpc error: code = NotFound desc = an error occurred when try to find container \"146f9a02246a95630e8f3186736b482e878fed64621050556043e231c541bd03\": not found" Apr 30 12:39:49.160208 kubelet[3379]: I0430 12:39:49.160180 3379 scope.go:117] "RemoveContainer" containerID="8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7" Apr 30 12:39:49.160437 containerd[1740]: time="2025-04-30T12:39:49.160403196Z" level=error msg="ContainerStatus for \"8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7\": not found" Apr 30 12:39:49.160604 kubelet[3379]: E0430 12:39:49.160581 3379 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7\": not found" containerID="8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7" Apr 30 12:39:49.160639 kubelet[3379]: I0430 12:39:49.160623 3379 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7"} err="failed to get container status 
\"8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7\": rpc error: code = NotFound desc = an error occurred when try to find container \"8bab9f5ad94ca965751af6dbd5368413616b3112ba483a49c51da37c00ab6ee7\": not found" Apr 30 12:39:49.160665 kubelet[3379]: I0430 12:39:49.160640 3379 scope.go:117] "RemoveContainer" containerID="5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8" Apr 30 12:39:49.161941 containerd[1740]: time="2025-04-30T12:39:49.161906757Z" level=info msg="RemoveContainer for \"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8\"" Apr 30 12:39:49.170377 containerd[1740]: time="2025-04-30T12:39:49.170321842Z" level=info msg="RemoveContainer for \"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8\" returns successfully" Apr 30 12:39:49.171926 kubelet[3379]: I0430 12:39:49.171405 3379 scope.go:117] "RemoveContainer" containerID="5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8" Apr 30 12:39:49.172201 containerd[1740]: time="2025-04-30T12:39:49.172159003Z" level=error msg="ContainerStatus for \"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8\": not found" Apr 30 12:39:49.172602 kubelet[3379]: E0430 12:39:49.172562 3379 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8\": not found" containerID="5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8" Apr 30 12:39:49.172688 kubelet[3379]: I0430 12:39:49.172601 3379 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8"} err="failed to get container status 
\"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d0a63cb097ecb342406b98be0d2e1a0530b13dc699495f14d5bfba5c92acdf8\": not found" Apr 30 12:39:49.501077 kubelet[3379]: I0430 12:39:49.501036 3379 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba318d2-bf69-4a5d-ab60-b49dad24502f" path="/var/lib/kubelet/pods/2ba318d2-bf69-4a5d-ab60-b49dad24502f/volumes" Apr 30 12:39:49.501583 kubelet[3379]: I0430 12:39:49.501560 3379 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" path="/var/lib/kubelet/pods/efaa5877-6f1c-4369-bf3a-9c61e0e90fe7/volumes" Apr 30 12:39:49.824118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5259e601639f2e09084d7d1062a263c05903bcea9f110feedd64433e83bc6a5d-rootfs.mount: Deactivated successfully. Apr 30 12:39:49.824226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-140f9b866e62ac29ac385801f749e3efc4a6dd5de291e0d16dec7afe92cf78b0-rootfs.mount: Deactivated successfully. Apr 30 12:39:49.824281 systemd[1]: var-lib-kubelet-pods-efaa5877\x2d6f1c\x2d4369\x2dbf3a\x2d9c61e0e90fe7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh6598.mount: Deactivated successfully. Apr 30 12:39:49.824336 systemd[1]: var-lib-kubelet-pods-2ba318d2\x2dbf69\x2d4a5d\x2dab60\x2db49dad24502f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh9qbd.mount: Deactivated successfully. Apr 30 12:39:49.824388 systemd[1]: var-lib-kubelet-pods-efaa5877\x2d6f1c\x2d4369\x2dbf3a\x2d9c61e0e90fe7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 12:39:49.824460 systemd[1]: var-lib-kubelet-pods-efaa5877\x2d6f1c\x2d4369\x2dbf3a\x2d9c61e0e90fe7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 30 12:39:50.815466 sshd[5008]: Connection closed by 10.200.16.10 port 57060 Apr 30 12:39:50.816255 sshd-session[5006]: pam_unix(sshd:session): session closed for user core Apr 30 12:39:50.819764 systemd[1]: sshd@22-10.200.20.24:22-10.200.16.10:57060.service: Deactivated successfully. Apr 30 12:39:50.822355 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 12:39:50.822631 systemd[1]: session-25.scope: Consumed 1.215s CPU time, 23.6M memory peak. Apr 30 12:39:50.823887 systemd-logind[1709]: Session 25 logged out. Waiting for processes to exit. Apr 30 12:39:50.825389 systemd-logind[1709]: Removed session 25. Apr 30 12:39:50.905734 systemd[1]: Started sshd@23-10.200.20.24:22-10.200.16.10:39936.service - OpenSSH per-connection server daemon (10.200.16.10:39936). Apr 30 12:39:51.389933 sshd[5170]: Accepted publickey for core from 10.200.16.10 port 39936 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak Apr 30 12:39:51.391329 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:39:51.395790 systemd-logind[1709]: New session 26 of user core. Apr 30 12:39:51.407865 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 30 12:39:52.365848 kubelet[3379]: I0430 12:39:52.365800 3379 topology_manager.go:215] "Topology Admit Handler" podUID="3e8d24c5-7ed4-4dd6-8cff-cba8f42db412" podNamespace="kube-system" podName="cilium-4sg7g" Apr 30 12:39:52.367324 kubelet[3379]: E0430 12:39:52.365862 3379 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" containerName="clean-cilium-state" Apr 30 12:39:52.367324 kubelet[3379]: E0430 12:39:52.365873 3379 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" containerName="cilium-agent" Apr 30 12:39:52.367324 kubelet[3379]: E0430 12:39:52.365880 3379 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" containerName="apply-sysctl-overwrites" Apr 30 12:39:52.367324 kubelet[3379]: E0430 12:39:52.365886 3379 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" containerName="mount-bpf-fs" Apr 30 12:39:52.367324 kubelet[3379]: E0430 12:39:52.365892 3379 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2ba318d2-bf69-4a5d-ab60-b49dad24502f" containerName="cilium-operator" Apr 30 12:39:52.367324 kubelet[3379]: E0430 12:39:52.365899 3379 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" containerName="mount-cgroup" Apr 30 12:39:52.367324 kubelet[3379]: I0430 12:39:52.365922 3379 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba318d2-bf69-4a5d-ab60-b49dad24502f" containerName="cilium-operator" Apr 30 12:39:52.367324 kubelet[3379]: I0430 12:39:52.365928 3379 memory_manager.go:354] "RemoveStaleState removing state" podUID="efaa5877-6f1c-4369-bf3a-9c61e0e90fe7" containerName="cilium-agent" Apr 30 12:39:52.368882 kubelet[3379]: I0430 12:39:52.368146 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-cilium-cgroup\") pod \"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.368882 kubelet[3379]: I0430 12:39:52.368185 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-cilium-ipsec-secrets\") pod \"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.368882 kubelet[3379]: I0430 12:39:52.368202 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-host-proc-sys-net\") pod \"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.368882 kubelet[3379]: I0430 12:39:52.368217 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-lib-modules\") pod \"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.368882 kubelet[3379]: I0430 12:39:52.368231 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-xtables-lock\") pod \"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.368882 kubelet[3379]: I0430 12:39:52.368246 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-cilium-config-path\") pod 
\"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.369058 kubelet[3379]: I0430 12:39:52.368259 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-cilium-run\") pod \"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.369058 kubelet[3379]: I0430 12:39:52.368277 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-clustermesh-secrets\") pod \"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.369058 kubelet[3379]: I0430 12:39:52.368292 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-cni-path\") pod \"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.369058 kubelet[3379]: I0430 12:39:52.368307 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-hubble-tls\") pod \"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.369058 kubelet[3379]: I0430 12:39:52.368323 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-host-proc-sys-kernel\") pod \"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.369058 
kubelet[3379]: I0430 12:39:52.368338 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9dct\" (UniqueName: \"kubernetes.io/projected/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-kube-api-access-h9dct\") pod \"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.369239 kubelet[3379]: I0430 12:39:52.368354 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-bpf-maps\") pod \"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.369239 kubelet[3379]: I0430 12:39:52.368369 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-hostproc\") pod \"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.369239 kubelet[3379]: I0430 12:39:52.368387 3379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e8d24c5-7ed4-4dd6-8cff-cba8f42db412-etc-cni-netd\") pod \"cilium-4sg7g\" (UID: \"3e8d24c5-7ed4-4dd6-8cff-cba8f42db412\") " pod="kube-system/cilium-4sg7g" Apr 30 12:39:52.379298 systemd[1]: Created slice kubepods-burstable-pod3e8d24c5_7ed4_4dd6_8cff_cba8f42db412.slice - libcontainer container kubepods-burstable-pod3e8d24c5_7ed4_4dd6_8cff_cba8f42db412.slice. Apr 30 12:39:52.423925 sshd[5172]: Connection closed by 10.200.16.10 port 39936 Apr 30 12:39:52.425689 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Apr 30 12:39:52.430189 systemd-logind[1709]: Session 26 logged out. Waiting for processes to exit. 
Apr 30 12:39:52.432296 systemd[1]: sshd@23-10.200.20.24:22-10.200.16.10:39936.service: Deactivated successfully. Apr 30 12:39:52.437746 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 12:39:52.439865 systemd-logind[1709]: Removed session 26. Apr 30 12:39:52.510819 systemd[1]: Started sshd@24-10.200.20.24:22-10.200.16.10:39946.service - OpenSSH per-connection server daemon (10.200.16.10:39946). Apr 30 12:39:52.685625 containerd[1740]: time="2025-04-30T12:39:52.684621273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4sg7g,Uid:3e8d24c5-7ed4-4dd6-8cff-cba8f42db412,Namespace:kube-system,Attempt:0,}" Apr 30 12:39:52.720670 containerd[1740]: time="2025-04-30T12:39:52.720521534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:39:52.720670 containerd[1740]: time="2025-04-30T12:39:52.720597374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:39:52.720670 containerd[1740]: time="2025-04-30T12:39:52.720613254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:39:52.721314 containerd[1740]: time="2025-04-30T12:39:52.720708574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:39:52.743673 systemd[1]: Started cri-containerd-ae475d2895e7a45fcbd3a3a2c9f3a99b1bcc4ae64433477d9e6627ea66a7bcdf.scope - libcontainer container ae475d2895e7a45fcbd3a3a2c9f3a99b1bcc4ae64433477d9e6627ea66a7bcdf. 
Apr 30 12:39:52.767410 containerd[1740]: time="2025-04-30T12:39:52.767364562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4sg7g,Uid:3e8d24c5-7ed4-4dd6-8cff-cba8f42db412,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae475d2895e7a45fcbd3a3a2c9f3a99b1bcc4ae64433477d9e6627ea66a7bcdf\"" Apr 30 12:39:52.771814 containerd[1740]: time="2025-04-30T12:39:52.771700325Z" level=info msg="CreateContainer within sandbox \"ae475d2895e7a45fcbd3a3a2c9f3a99b1bcc4ae64433477d9e6627ea66a7bcdf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:39:52.806846 containerd[1740]: time="2025-04-30T12:39:52.806791666Z" level=info msg="CreateContainer within sandbox \"ae475d2895e7a45fcbd3a3a2c9f3a99b1bcc4ae64433477d9e6627ea66a7bcdf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4b5b6b49a05e6890d23c7415800e44d2c7e6018c2615bda385e9e9d66ca0de3f\"" Apr 30 12:39:52.807654 containerd[1740]: time="2025-04-30T12:39:52.807595746Z" level=info msg="StartContainer for \"4b5b6b49a05e6890d23c7415800e44d2c7e6018c2615bda385e9e9d66ca0de3f\"" Apr 30 12:39:52.833778 systemd[1]: Started cri-containerd-4b5b6b49a05e6890d23c7415800e44d2c7e6018c2615bda385e9e9d66ca0de3f.scope - libcontainer container 4b5b6b49a05e6890d23c7415800e44d2c7e6018c2615bda385e9e9d66ca0de3f. Apr 30 12:39:52.863993 containerd[1740]: time="2025-04-30T12:39:52.863811380Z" level=info msg="StartContainer for \"4b5b6b49a05e6890d23c7415800e44d2c7e6018c2615bda385e9e9d66ca0de3f\" returns successfully" Apr 30 12:39:52.869231 systemd[1]: cri-containerd-4b5b6b49a05e6890d23c7415800e44d2c7e6018c2615bda385e9e9d66ca0de3f.scope: Deactivated successfully. 
Apr 30 12:39:52.940371 containerd[1740]: time="2025-04-30T12:39:52.940122346Z" level=info msg="shim disconnected" id=4b5b6b49a05e6890d23c7415800e44d2c7e6018c2615bda385e9e9d66ca0de3f namespace=k8s.io
Apr 30 12:39:52.940371 containerd[1740]: time="2025-04-30T12:39:52.940186826Z" level=warning msg="cleaning up after shim disconnected" id=4b5b6b49a05e6890d23c7415800e44d2c7e6018c2615bda385e9e9d66ca0de3f namespace=k8s.io
Apr 30 12:39:52.940371 containerd[1740]: time="2025-04-30T12:39:52.940194706Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:52.961768 sshd[5187]: Accepted publickey for core from 10.200.16.10 port 39946 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:52.963104 sshd-session[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:52.967491 systemd-logind[1709]: New session 27 of user core.
Apr 30 12:39:52.971629 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 12:39:53.109848 containerd[1740]: time="2025-04-30T12:39:53.109797288Z" level=info msg="CreateContainer within sandbox \"ae475d2895e7a45fcbd3a3a2c9f3a99b1bcc4ae64433477d9e6627ea66a7bcdf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 12:39:53.141668 containerd[1740]: time="2025-04-30T12:39:53.141619507Z" level=info msg="CreateContainer within sandbox \"ae475d2895e7a45fcbd3a3a2c9f3a99b1bcc4ae64433477d9e6627ea66a7bcdf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4fba622eb415b8f717a323647117816d4845261695f5c8feace94b483d24caa1\""
Apr 30 12:39:53.142344 containerd[1740]: time="2025-04-30T12:39:53.142316068Z" level=info msg="StartContainer for \"4fba622eb415b8f717a323647117816d4845261695f5c8feace94b483d24caa1\""
Apr 30 12:39:53.169745 systemd[1]: Started cri-containerd-4fba622eb415b8f717a323647117816d4845261695f5c8feace94b483d24caa1.scope - libcontainer container 4fba622eb415b8f717a323647117816d4845261695f5c8feace94b483d24caa1.
Apr 30 12:39:53.207472 containerd[1740]: time="2025-04-30T12:39:53.207328787Z" level=info msg="StartContainer for \"4fba622eb415b8f717a323647117816d4845261695f5c8feace94b483d24caa1\" returns successfully"
Apr 30 12:39:53.213479 systemd[1]: cri-containerd-4fba622eb415b8f717a323647117816d4845261695f5c8feace94b483d24caa1.scope: Deactivated successfully.
Apr 30 12:39:53.247234 containerd[1740]: time="2025-04-30T12:39:53.247165051Z" level=info msg="shim disconnected" id=4fba622eb415b8f717a323647117816d4845261695f5c8feace94b483d24caa1 namespace=k8s.io
Apr 30 12:39:53.247234 containerd[1740]: time="2025-04-30T12:39:53.247221491Z" level=warning msg="cleaning up after shim disconnected" id=4fba622eb415b8f717a323647117816d4845261695f5c8feace94b483d24caa1 namespace=k8s.io
Apr 30 12:39:53.247234 containerd[1740]: time="2025-04-30T12:39:53.247229331Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:53.285705 sshd[5289]: Connection closed by 10.200.16.10 port 39946
Apr 30 12:39:53.286503 sshd-session[5187]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:53.290570 systemd[1]: sshd@24-10.200.20.24:22-10.200.16.10:39946.service: Deactivated successfully.
Apr 30 12:39:53.292807 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 12:39:53.294077 systemd-logind[1709]: Session 27 logged out. Waiting for processes to exit.
Apr 30 12:39:53.295655 systemd-logind[1709]: Removed session 27.
Apr 30 12:39:53.374756 systemd[1]: Started sshd@25-10.200.20.24:22-10.200.16.10:39958.service - OpenSSH per-connection server daemon (10.200.16.10:39958).
Apr 30 12:39:53.658870 kubelet[3379]: E0430 12:39:53.658731 3379 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 12:39:53.831384 sshd[5358]: Accepted publickey for core from 10.200.16.10 port 39958 ssh2: RSA SHA256:an+obxm9dtIDaPrjI67eRUf5YSWV+lsrnJbn+IiLxak
Apr 30 12:39:53.832768 sshd-session[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:53.837219 systemd-logind[1709]: New session 28 of user core.
Apr 30 12:39:53.840626 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 30 12:39:54.113363 containerd[1740]: time="2025-04-30T12:39:54.113277451Z" level=info msg="CreateContainer within sandbox \"ae475d2895e7a45fcbd3a3a2c9f3a99b1bcc4ae64433477d9e6627ea66a7bcdf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 12:39:54.146077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106900184.mount: Deactivated successfully.
Apr 30 12:39:54.157164 containerd[1740]: time="2025-04-30T12:39:54.156987597Z" level=info msg="CreateContainer within sandbox \"ae475d2895e7a45fcbd3a3a2c9f3a99b1bcc4ae64433477d9e6627ea66a7bcdf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e0476ccf0a27fb9667ae77019e496cd33d3c760da9081e4cea6f7c8556dce869\""
Apr 30 12:39:54.159850 containerd[1740]: time="2025-04-30T12:39:54.158234398Z" level=info msg="StartContainer for \"e0476ccf0a27fb9667ae77019e496cd33d3c760da9081e4cea6f7c8556dce869\""
Apr 30 12:39:54.206644 systemd[1]: Started cri-containerd-e0476ccf0a27fb9667ae77019e496cd33d3c760da9081e4cea6f7c8556dce869.scope - libcontainer container e0476ccf0a27fb9667ae77019e496cd33d3c760da9081e4cea6f7c8556dce869.
Apr 30 12:39:54.234675 systemd[1]: cri-containerd-e0476ccf0a27fb9667ae77019e496cd33d3c760da9081e4cea6f7c8556dce869.scope: Deactivated successfully.
Apr 30 12:39:54.238557 containerd[1740]: time="2025-04-30T12:39:54.238513846Z" level=info msg="StartContainer for \"e0476ccf0a27fb9667ae77019e496cd33d3c760da9081e4cea6f7c8556dce869\" returns successfully"
Apr 30 12:39:54.273329 containerd[1740]: time="2025-04-30T12:39:54.273202307Z" level=info msg="shim disconnected" id=e0476ccf0a27fb9667ae77019e496cd33d3c760da9081e4cea6f7c8556dce869 namespace=k8s.io
Apr 30 12:39:54.273329 containerd[1740]: time="2025-04-30T12:39:54.273261787Z" level=warning msg="cleaning up after shim disconnected" id=e0476ccf0a27fb9667ae77019e496cd33d3c760da9081e4cea6f7c8556dce869 namespace=k8s.io
Apr 30 12:39:54.273329 containerd[1740]: time="2025-04-30T12:39:54.273270027Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:54.477065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0476ccf0a27fb9667ae77019e496cd33d3c760da9081e4cea6f7c8556dce869-rootfs.mount: Deactivated successfully.
Apr 30 12:39:55.117731 containerd[1740]: time="2025-04-30T12:39:55.117615454Z" level=info msg="CreateContainer within sandbox \"ae475d2895e7a45fcbd3a3a2c9f3a99b1bcc4ae64433477d9e6627ea66a7bcdf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 12:39:55.152464 containerd[1740]: time="2025-04-30T12:39:55.152291675Z" level=info msg="CreateContainer within sandbox \"ae475d2895e7a45fcbd3a3a2c9f3a99b1bcc4ae64433477d9e6627ea66a7bcdf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d091374a0630066d779168b849c13ac5391519336c85ea69da2b661932045526\""
Apr 30 12:39:55.152956 containerd[1740]: time="2025-04-30T12:39:55.152923435Z" level=info msg="StartContainer for \"d091374a0630066d779168b849c13ac5391519336c85ea69da2b661932045526\""
Apr 30 12:39:55.185658 systemd[1]: Started cri-containerd-d091374a0630066d779168b849c13ac5391519336c85ea69da2b661932045526.scope - libcontainer container d091374a0630066d779168b849c13ac5391519336c85ea69da2b661932045526.
Apr 30 12:39:55.210087 systemd[1]: cri-containerd-d091374a0630066d779168b849c13ac5391519336c85ea69da2b661932045526.scope: Deactivated successfully.
Apr 30 12:39:55.215273 containerd[1740]: time="2025-04-30T12:39:55.215195193Z" level=info msg="StartContainer for \"d091374a0630066d779168b849c13ac5391519336c85ea69da2b661932045526\" returns successfully"
Apr 30 12:39:55.246214 containerd[1740]: time="2025-04-30T12:39:55.246146411Z" level=info msg="shim disconnected" id=d091374a0630066d779168b849c13ac5391519336c85ea69da2b661932045526 namespace=k8s.io
Apr 30 12:39:55.246696 containerd[1740]: time="2025-04-30T12:39:55.246361811Z" level=warning msg="cleaning up after shim disconnected" id=d091374a0630066d779168b849c13ac5391519336c85ea69da2b661932045526 namespace=k8s.io
Apr 30 12:39:55.246696 containerd[1740]: time="2025-04-30T12:39:55.246375011Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:55.477027 systemd[1]: run-containerd-runc-k8s.io-d091374a0630066d779168b849c13ac5391519336c85ea69da2b661932045526-runc.Vv7R6c.mount: Deactivated successfully.
Apr 30 12:39:55.477125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d091374a0630066d779168b849c13ac5391519336c85ea69da2b661932045526-rootfs.mount: Deactivated successfully.
Apr 30 12:39:56.121181 containerd[1740]: time="2025-04-30T12:39:56.120278656Z" level=info msg="CreateContainer within sandbox \"ae475d2895e7a45fcbd3a3a2c9f3a99b1bcc4ae64433477d9e6627ea66a7bcdf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 12:39:56.159296 containerd[1740]: time="2025-04-30T12:39:56.159241600Z" level=info msg="CreateContainer within sandbox \"ae475d2895e7a45fcbd3a3a2c9f3a99b1bcc4ae64433477d9e6627ea66a7bcdf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bbccbc4656243db78c9de66fddc851ae33af3447cd90a7e791b663efaba117c4\""
Apr 30 12:39:56.160193 containerd[1740]: time="2025-04-30T12:39:56.160167400Z" level=info msg="StartContainer for \"bbccbc4656243db78c9de66fddc851ae33af3447cd90a7e791b663efaba117c4\""
Apr 30 12:39:56.191647 systemd[1]: Started cri-containerd-bbccbc4656243db78c9de66fddc851ae33af3447cd90a7e791b663efaba117c4.scope - libcontainer container bbccbc4656243db78c9de66fddc851ae33af3447cd90a7e791b663efaba117c4.
Apr 30 12:39:56.229774 containerd[1740]: time="2025-04-30T12:39:56.229720162Z" level=info msg="StartContainer for \"bbccbc4656243db78c9de66fddc851ae33af3447cd90a7e791b663efaba117c4\" returns successfully"
Apr 30 12:39:56.800496 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 30 12:39:58.362638 systemd[1]: run-containerd-runc-k8s.io-bbccbc4656243db78c9de66fddc851ae33af3447cd90a7e791b663efaba117c4-runc.4CBbOf.mount: Deactivated successfully.
Apr 30 12:39:58.535389 kubelet[3379]: I0430 12:39:58.535338 3379 setters.go:580] "Node became not ready" node="ci-4230.1.1-a-9a970e7770" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T12:39:58Z","lastTransitionTime":"2025-04-30T12:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 30 12:39:59.634471 systemd-networkd[1556]: lxc_health: Link UP
Apr 30 12:39:59.635933 systemd-networkd[1556]: lxc_health: Gained carrier
Apr 30 12:40:00.514739 systemd[1]: run-containerd-runc-k8s.io-bbccbc4656243db78c9de66fddc851ae33af3447cd90a7e791b663efaba117c4-runc.e1PE0D.mount: Deactivated successfully.
Apr 30 12:40:00.710606 kubelet[3379]: I0430 12:40:00.710535 3379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4sg7g" podStartSLOduration=8.710517395 podStartE2EDuration="8.710517395s" podCreationTimestamp="2025-04-30 12:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:39:57.140187187 +0000 UTC m=+283.961054806" watchObservedRunningTime="2025-04-30 12:40:00.710517395 +0000 UTC m=+287.531384934"
Apr 30 12:40:01.612153 systemd-networkd[1556]: lxc_health: Gained IPv6LL
Apr 30 12:40:04.934950 sshd[5360]: Connection closed by 10.200.16.10 port 39958
Apr 30 12:40:04.936684 sshd-session[5358]: pam_unix(sshd:session): session closed for user core
Apr 30 12:40:04.940382 systemd[1]: sshd@25-10.200.20.24:22-10.200.16.10:39958.service: Deactivated successfully.
Apr 30 12:40:04.943194 systemd[1]: session-28.scope: Deactivated successfully.
Apr 30 12:40:04.945365 systemd-logind[1709]: Session 28 logged out. Waiting for processes to exit.
Apr 30 12:40:04.946547 systemd-logind[1709]: Removed session 28.