May 7 23:44:00.318599 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 7 23:44:00.318622 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 7 22:21:35 -00 2025
May 7 23:44:00.318630 kernel: KASLR enabled
May 7 23:44:00.318636 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
May 7 23:44:00.318643 kernel: printk: bootconsole [pl11] enabled
May 7 23:44:00.318648 kernel: efi: EFI v2.7 by EDK II
May 7 23:44:00.318655 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
May 7 23:44:00.318661 kernel: random: crng init done
May 7 23:44:00.318666 kernel: secureboot: Secure boot disabled
May 7 23:44:00.318672 kernel: ACPI: Early table checksum verification disabled
May 7 23:44:00.318678 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
May 7 23:44:00.318684 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 7 23:44:00.318689 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 7 23:44:00.318697 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 7 23:44:00.318704 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 7 23:44:00.318710 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 7 23:44:00.318716 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 7 23:44:00.318723 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 7 23:44:00.318730 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 7 23:44:00.318736 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 7 23:44:00.318742 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
May 7 23:44:00.318748 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 7 23:44:00.318754 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
May 7 23:44:00.318760 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
May 7 23:44:00.318766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
May 7 23:44:00.318772 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
May 7 23:44:00.318778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
May 7 23:44:00.318784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
May 7 23:44:00.318792 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
May 7 23:44:00.318798 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
May 7 23:44:00.318804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
May 7 23:44:00.318810 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
May 7 23:44:00.318816 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
May 7 23:44:00.318823 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
May 7 23:44:00.318829 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
May 7 23:44:00.318834 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
May 7 23:44:00.318840 kernel: Zone ranges:
May 7 23:44:00.318846 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
May 7 23:44:00.318852 kernel:   DMA32    empty
May 7 23:44:00.318858 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
May 7 23:44:00.318868 kernel: Movable zone start for each node
May 7 23:44:00.318875 kernel: Early memory node ranges
May 7 23:44:00.318881 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
May 7 23:44:00.318888 kernel:   node   0: [mem 0x0000000000824000-0x000000003e45ffff]
May 7 23:44:00.318894 kernel:   node   0: [mem 0x000000003e460000-0x000000003e46ffff]
May 7 23:44:00.318902 kernel:   node   0: [mem 0x000000003e470000-0x000000003e54ffff]
May 7 23:44:00.318909 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
May 7 23:44:00.318915 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
May 7 23:44:00.318921 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
May 7 23:44:00.318928 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
May 7 23:44:00.318934 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
May 7 23:44:00.318941 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
May 7 23:44:00.318947 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
May 7 23:44:00.318953 kernel: psci: probing for conduit method from ACPI.
May 7 23:44:00.318960 kernel: psci: PSCIv1.1 detected in firmware.
May 7 23:44:00.318966 kernel: psci: Using standard PSCI v0.2 function IDs
May 7 23:44:00.318972 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 7 23:44:00.318980 kernel: psci: SMC Calling Convention v1.4
May 7 23:44:00.318987 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 7 23:44:00.318993 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
May 7 23:44:00.319000 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
May 7 23:44:00.319006 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
May 7 23:44:00.319013 kernel: pcpu-alloc: [0] 0 [0] 1
May 7 23:44:00.319019 kernel: Detected PIPT I-cache on CPU0
May 7 23:44:00.319026 kernel: CPU features: detected: GIC system register CPU interface
May 7 23:44:00.319032 kernel: CPU features: detected: Hardware dirty bit management
May 7 23:44:00.319039 kernel: CPU features: detected: Spectre-BHB
May 7 23:44:00.319045 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 7 23:44:00.319053 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 7 23:44:00.319059 kernel: CPU features: detected: ARM erratum 1418040
May 7 23:44:00.319066 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
May 7 23:44:00.319072 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 7 23:44:00.319079 kernel: alternatives: applying boot alternatives
May 7 23:44:00.319087 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=82f9441f083668f7b43f8fe99c3dc9ee441b8a3ef2f63ecd1e548de4dde5b207
May 7 23:44:00.319093 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 7 23:44:00.319100 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 7 23:44:00.319107 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 7 23:44:00.319113 kernel: Fallback order for Node 0: 0
May 7 23:44:00.319119 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
May 7 23:44:00.319127 kernel: Policy zone: Normal
May 7 23:44:00.319134 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 7 23:44:00.319140 kernel: software IO TLB: area num 2.
May 7 23:44:00.319147 kernel: software IO TLB: mapped [mem 0x0000000036540000-0x000000003a540000] (64MB)
May 7 23:44:00.319153 kernel: Memory: 3983592K/4194160K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 210568K reserved, 0K cma-reserved)
May 7 23:44:00.319160 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 7 23:44:00.319167 kernel: rcu: Preemptible hierarchical RCU implementation.
May 7 23:44:00.321228 kernel: rcu: RCU event tracing is enabled.
May 7 23:44:00.321239 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 7 23:44:00.321246 kernel: Trampoline variant of Tasks RCU enabled.
May 7 23:44:00.321253 kernel: Tracing variant of Tasks RCU enabled.
May 7 23:44:00.321265 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 7 23:44:00.321273 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 7 23:44:00.321280 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 7 23:44:00.321287 kernel: GICv3: 960 SPIs implemented
May 7 23:44:00.321294 kernel: GICv3: 0 Extended SPIs implemented
May 7 23:44:00.321301 kernel: Root IRQ handler: gic_handle_irq
May 7 23:44:00.321308 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 7 23:44:00.321314 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
May 7 23:44:00.321321 kernel: ITS: No ITS available, not enabling LPIs
May 7 23:44:00.321328 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 7 23:44:00.321335 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 7 23:44:00.321341 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 7 23:44:00.321350 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 7 23:44:00.321357 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 7 23:44:00.321364 kernel: Console: colour dummy device 80x25
May 7 23:44:00.321371 kernel: printk: console [tty1] enabled
May 7 23:44:00.321378 kernel: ACPI: Core revision 20230628
May 7 23:44:00.321384 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 7 23:44:00.321391 kernel: pid_max: default: 32768 minimum: 301
May 7 23:44:00.321398 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 7 23:44:00.321405 kernel: landlock: Up and running.
May 7 23:44:00.321413 kernel: SELinux: Initializing.
May 7 23:44:00.321420 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 7 23:44:00.321427 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 7 23:44:00.321434 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 7 23:44:00.321441 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 7 23:44:00.321448 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
May 7 23:44:00.321455 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
May 7 23:44:00.321469 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 7 23:44:00.321476 kernel: rcu: Hierarchical SRCU implementation.
May 7 23:44:00.321483 kernel: rcu: Max phase no-delay instances is 400.
May 7 23:44:00.321490 kernel: Remapping and enabling EFI services.
May 7 23:44:00.321497 kernel: smp: Bringing up secondary CPUs ...
May 7 23:44:00.321506 kernel: Detected PIPT I-cache on CPU1
May 7 23:44:00.321513 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
May 7 23:44:00.321521 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 7 23:44:00.321528 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 7 23:44:00.321535 kernel: smp: Brought up 1 node, 2 CPUs
May 7 23:44:00.321544 kernel: SMP: Total of 2 processors activated.
May 7 23:44:00.321551 kernel: CPU features: detected: 32-bit EL0 Support
May 7 23:44:00.321558 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
May 7 23:44:00.321565 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 7 23:44:00.321572 kernel: CPU features: detected: CRC32 instructions
May 7 23:44:00.321579 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 7 23:44:00.321586 kernel: CPU features: detected: LSE atomic instructions
May 7 23:44:00.321593 kernel: CPU features: detected: Privileged Access Never
May 7 23:44:00.321600 kernel: CPU: All CPU(s) started at EL1
May 7 23:44:00.321609 kernel: alternatives: applying system-wide alternatives
May 7 23:44:00.321616 kernel: devtmpfs: initialized
May 7 23:44:00.321623 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 7 23:44:00.321631 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 7 23:44:00.321638 kernel: pinctrl core: initialized pinctrl subsystem
May 7 23:44:00.321645 kernel: SMBIOS 3.1.0 present.
May 7 23:44:00.321652 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
May 7 23:44:00.321659 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 7 23:44:00.321666 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 7 23:44:00.321675 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 7 23:44:00.321682 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 7 23:44:00.321690 kernel: audit: initializing netlink subsys (disabled)
May 7 23:44:00.321697 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
May 7 23:44:00.321704 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 7 23:44:00.321711 kernel: cpuidle: using governor menu
May 7 23:44:00.321718 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 7 23:44:00.321725 kernel: ASID allocator initialised with 32768 entries
May 7 23:44:00.321732 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 7 23:44:00.321741 kernel: Serial: AMBA PL011 UART driver
May 7 23:44:00.321748 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 7 23:44:00.321755 kernel: Modules: 0 pages in range for non-PLT usage
May 7 23:44:00.321763 kernel: Modules: 509264 pages in range for PLT usage
May 7 23:44:00.321770 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 7 23:44:00.321777 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 7 23:44:00.321784 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 7 23:44:00.321791 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 7 23:44:00.321798 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 7 23:44:00.321807 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 7 23:44:00.321815 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 7 23:44:00.321822 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 7 23:44:00.321829 kernel: ACPI: Added _OSI(Module Device)
May 7 23:44:00.321836 kernel: ACPI: Added _OSI(Processor Device)
May 7 23:44:00.321843 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 7 23:44:00.321850 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 7 23:44:00.321857 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 7 23:44:00.321864 kernel: ACPI: Interpreter enabled
May 7 23:44:00.321872 kernel: ACPI: Using GIC for interrupt routing
May 7 23:44:00.321879 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
May 7 23:44:00.321886 kernel: printk: console [ttyAMA0] enabled
May 7 23:44:00.321893 kernel: printk: bootconsole [pl11] disabled
May 7 23:44:00.321901 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
May 7 23:44:00.321908 kernel: iommu: Default domain type: Translated
May 7 23:44:00.321915 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 7 23:44:00.321922 kernel: efivars: Registered efivars operations
May 7 23:44:00.321929 kernel: vgaarb: loaded
May 7 23:44:00.321938 kernel: clocksource: Switched to clocksource arch_sys_counter
May 7 23:44:00.321945 kernel: VFS: Disk quotas dquot_6.6.0
May 7 23:44:00.321952 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 7 23:44:00.321959 kernel: pnp: PnP ACPI init
May 7 23:44:00.321966 kernel: pnp: PnP ACPI: found 0 devices
May 7 23:44:00.321973 kernel: NET: Registered PF_INET protocol family
May 7 23:44:00.321980 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 7 23:44:00.321988 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 7 23:44:00.321995 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 7 23:44:00.322004 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 7 23:44:00.322011 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 7 23:44:00.322018 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 7 23:44:00.322025 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 7 23:44:00.322033 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 7 23:44:00.322040 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 7 23:44:00.322047 kernel: PCI: CLS 0 bytes, default 64
May 7 23:44:00.322054 kernel: kvm [1]: HYP mode not available
May 7 23:44:00.322061 kernel: Initialise system trusted keyrings
May 7 23:44:00.322070 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 7 23:44:00.322077 kernel: Key type asymmetric registered
May 7 23:44:00.322084 kernel: Asymmetric key parser 'x509' registered
May 7 23:44:00.322091 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 7 23:44:00.322099 kernel: io scheduler mq-deadline registered
May 7 23:44:00.322106 kernel: io scheduler kyber registered
May 7 23:44:00.322113 kernel: io scheduler bfq registered
May 7 23:44:00.322120 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 7 23:44:00.322127 kernel: thunder_xcv, ver 1.0
May 7 23:44:00.322136 kernel: thunder_bgx, ver 1.0
May 7 23:44:00.322143 kernel: nicpf, ver 1.0
May 7 23:44:00.322150 kernel: nicvf, ver 1.0
May 7 23:44:00.322340 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 7 23:44:00.322416 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-07T23:43:59 UTC (1746661439)
May 7 23:44:00.322425 kernel: efifb: probing for efifb
May 7 23:44:00.322433 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 7 23:44:00.322440 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 7 23:44:00.322451 kernel: efifb: scrolling: redraw
May 7 23:44:00.322458 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 7 23:44:00.322465 kernel: Console: switching to colour frame buffer device 128x48
May 7 23:44:00.322472 kernel: fb0: EFI VGA frame buffer device
May 7 23:44:00.322479 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
May 7 23:44:00.322487 kernel: hid: raw HID events driver (C) Jiri Kosina
May 7 23:44:00.322493 kernel: No ACPI PMU IRQ for CPU0
May 7 23:44:00.322500 kernel: No ACPI PMU IRQ for CPU1
May 7 23:44:00.322507 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
May 7 23:44:00.322517 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 7 23:44:00.322524 kernel: watchdog: Hard watchdog permanently disabled
May 7 23:44:00.322531 kernel: NET: Registered PF_INET6 protocol family
May 7 23:44:00.322538 kernel: Segment Routing with IPv6
May 7 23:44:00.322545 kernel: In-situ OAM (IOAM) with IPv6
May 7 23:44:00.322552 kernel: NET: Registered PF_PACKET protocol family
May 7 23:44:00.322559 kernel: Key type dns_resolver registered
May 7 23:44:00.322566 kernel: registered taskstats version 1
May 7 23:44:00.322573 kernel: Loading compiled-in X.509 certificates
May 7 23:44:00.322581 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: f45666b1b2057b901dda15e57012558a26abdeb0'
May 7 23:44:00.322589 kernel: Key type .fscrypt registered
May 7 23:44:00.322595 kernel: Key type fscrypt-provisioning registered
May 7 23:44:00.322603 kernel: ima: No TPM chip found, activating TPM-bypass!
May 7 23:44:00.322610 kernel: ima: Allocated hash algorithm: sha1
May 7 23:44:00.322617 kernel: ima: No architecture policies found
May 7 23:44:00.322624 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 7 23:44:00.322631 kernel: clk: Disabling unused clocks
May 7 23:44:00.322638 kernel: Freeing unused kernel memory: 38336K
May 7 23:44:00.322647 kernel: Run /init as init process
May 7 23:44:00.322654 kernel:   with arguments:
May 7 23:44:00.322661 kernel:     /init
May 7 23:44:00.322668 kernel:   with environment:
May 7 23:44:00.322675 kernel:     HOME=/
May 7 23:44:00.322682 kernel:     TERM=linux
May 7 23:44:00.322689 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 7 23:44:00.322697 systemd[1]: Successfully made /usr/ read-only.
May 7 23:44:00.322708 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 7 23:44:00.322717 systemd[1]: Detected virtualization microsoft.
May 7 23:44:00.322724 systemd[1]: Detected architecture arm64.
May 7 23:44:00.322731 systemd[1]: Running in initrd.
May 7 23:44:00.322739 systemd[1]: No hostname configured, using default hostname.
May 7 23:44:00.322747 systemd[1]: Hostname set to .
May 7 23:44:00.322755 systemd[1]: Initializing machine ID from random generator.
May 7 23:44:00.322762 systemd[1]: Queued start job for default target initrd.target.
May 7 23:44:00.322772 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 7 23:44:00.322780 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 7 23:44:00.322788 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 7 23:44:00.322796 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 7 23:44:00.322803 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 7 23:44:00.322812 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 7 23:44:00.322821 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 7 23:44:00.322830 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 7 23:44:00.322838 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 7 23:44:00.322846 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 7 23:44:00.322853 systemd[1]: Reached target paths.target - Path Units.
May 7 23:44:00.322861 systemd[1]: Reached target slices.target - Slice Units.
May 7 23:44:00.322869 systemd[1]: Reached target swap.target - Swaps.
May 7 23:44:00.322876 systemd[1]: Reached target timers.target - Timer Units.
May 7 23:44:00.322884 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 7 23:44:00.322893 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 7 23:44:00.322901 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 7 23:44:00.322909 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 7 23:44:00.322917 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 7 23:44:00.322925 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 7 23:44:00.322932 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 7 23:44:00.322940 systemd[1]: Reached target sockets.target - Socket Units.
May 7 23:44:00.322947 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 7 23:44:00.322955 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 7 23:44:00.322964 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 7 23:44:00.322972 systemd[1]: Starting systemd-fsck-usr.service...
May 7 23:44:00.322980 systemd[1]: Starting systemd-journald.service - Journal Service...
May 7 23:44:00.323007 systemd-journald[218]: Collecting audit messages is disabled.
May 7 23:44:00.323028 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 7 23:44:00.323037 systemd-journald[218]: Journal started
May 7 23:44:00.323056 systemd-journald[218]: Runtime Journal (/run/log/journal/c1bdd27b2aee446aabf9a09c332ec63f) is 8M, max 78.5M, 70.5M free.
May 7 23:44:00.331665 systemd-modules-load[220]: Inserted module 'overlay'
May 7 23:44:00.337701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 7 23:44:00.359141 systemd[1]: Started systemd-journald.service - Journal Service.
May 7 23:44:00.362430 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 7 23:44:00.388273 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 7 23:44:00.388298 kernel: Bridge firewalling registered
May 7 23:44:00.383896 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 7 23:44:00.387050 systemd-modules-load[220]: Inserted module 'br_netfilter'
May 7 23:44:00.394631 systemd[1]: Finished systemd-fsck-usr.service.
May 7 23:44:00.408620 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 7 23:44:00.419391 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:44:00.445436 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 7 23:44:00.459747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 7 23:44:00.474375 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 7 23:44:00.494815 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 7 23:44:00.503813 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 7 23:44:00.518549 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 7 23:44:00.533772 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 7 23:44:00.545553 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 7 23:44:00.575550 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 7 23:44:00.595241 dracut-cmdline[250]: dracut-dracut-053
May 7 23:44:00.602597 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=82f9441f083668f7b43f8fe99c3dc9ee441b8a3ef2f63ecd1e548de4dde5b207
May 7 23:44:00.597381 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 7 23:44:00.639454 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 7 23:44:00.654626 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 7 23:44:00.692590 systemd-resolved[256]: Positive Trust Anchors:
May 7 23:44:00.692606 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 7 23:44:00.692637 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 7 23:44:00.699974 systemd-resolved[256]: Defaulting to hostname 'linux'.
May 7 23:44:00.708706 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 7 23:44:00.743593 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 7 23:44:00.786190 kernel: SCSI subsystem initialized
May 7 23:44:00.793187 kernel: Loading iSCSI transport class v2.0-870.
May 7 23:44:00.806227 kernel: iscsi: registered transport (tcp)
May 7 23:44:00.823755 kernel: iscsi: registered transport (qla4xxx)
May 7 23:44:00.823778 kernel: QLogic iSCSI HBA Driver
May 7 23:44:00.863459 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 7 23:44:00.880411 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 7 23:44:00.915194 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 7 23:44:00.915260 kernel: device-mapper: uevent: version 1.0.3
May 7 23:44:00.921404 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 7 23:44:00.971206 kernel: raid6: neonx8   gen() 15747 MB/s
May 7 23:44:00.993189 kernel: raid6: neonx4   gen() 15788 MB/s
May 7 23:44:01.013186 kernel: raid6: neonx2   gen() 13215 MB/s
May 7 23:44:01.037186 kernel: raid6: neonx1   gen() 10510 MB/s
May 7 23:44:01.057183 kernel: raid6: int64x8  gen()  6786 MB/s
May 7 23:44:01.077181 kernel: raid6: int64x4  gen()  7352 MB/s
May 7 23:44:01.104187 kernel: raid6: int64x2  gen()  6111 MB/s
May 7 23:44:01.127941 kernel: raid6: int64x1  gen()  5058 MB/s
May 7 23:44:01.127956 kernel: raid6: using algorithm neonx4 gen() 15788 MB/s
May 7 23:44:01.154384 kernel: raid6: .... xor() 12424 MB/s, rmw enabled
May 7 23:44:01.154399 kernel: raid6: using neon recovery algorithm
May 7 23:44:01.162183 kernel: xor: measuring software checksum speed
May 7 23:44:01.168683 kernel:    8regs           : 20152 MB/sec
May 7 23:44:01.168694 kernel:    32regs          : 21670 MB/sec
May 7 23:44:01.172087 kernel:    arm64_neon      : 27785 MB/sec
May 7 23:44:01.175920 kernel: xor: using function: arm64_neon (27785 MB/sec)
May 7 23:44:01.226203 kernel: Btrfs loaded, zoned=no, fsverity=no
May 7 23:44:01.237528 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 7 23:44:01.264453 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 7 23:44:01.293301 systemd-udevd[437]: Using default interface naming scheme 'v255'.
May 7 23:44:01.299854 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 7 23:44:01.322860 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 7 23:44:01.347442 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
May 7 23:44:01.376213 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 7 23:44:01.392449 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 7 23:44:01.443521 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 7 23:44:01.463555 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 7 23:44:01.492089 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 7 23:44:01.506602 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 7 23:44:01.523035 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 7 23:44:01.530973 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 7 23:44:01.557419 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 7 23:44:01.588286 kernel: hv_vmbus: Vmbus version:5.3
May 7 23:44:01.597964 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 7 23:44:01.680214 kernel: hv_vmbus: registering driver hv_storvsc
May 7 23:44:01.680238 kernel: hv_vmbus: registering driver hyperv_keyboard
May 7 23:44:01.680248 kernel: hv_vmbus: registering driver hid_hyperv
May 7 23:44:01.680266 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
May 7 23:44:01.680276 kernel: scsi host0: storvsc_host_t
May 7 23:44:01.680439 kernel: pps_core: LinuxPPS API ver. 1 registered
May 7 23:44:01.680449 kernel: scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
May 7 23:44:01.680470 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
May 7 23:44:01.680479 kernel: scsi host1: storvsc_host_t
May 7 23:44:01.680569 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 7 23:44:01.680656 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 7 23:44:01.629054 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 7 23:44:01.697456 kernel: hv_vmbus: registering driver hv_netvsc
May 7 23:44:01.697480 kernel: scsi 0:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 0
May 7 23:44:01.629276 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 7 23:44:01.713211 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 7 23:44:01.729283 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 7 23:44:01.756827 kernel: PTP clock support registered
May 7 23:44:01.729495 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:44:01.749321 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 7 23:44:01.778549 kernel: hv_utils: Registering HyperV Utility Driver
May 7 23:44:01.778579 kernel: hv_vmbus: registering driver hv_utils
May 7 23:44:01.803091 kernel: hv_utils: Heartbeat IC version 3.0
May 7 23:44:01.803146 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 7 23:44:02.053736 kernel: hv_utils: Shutdown IC version 3.2
May 7 23:44:02.053759 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 7 23:44:02.053769 kernel: hv_utils: TimeSync IC version 4.0
May 7 23:44:02.053778 kernel: hv_netvsc 002248b7-303a-0022-48b7-303a002248b7 eth0: VF slot 1 added
May 7 23:44:02.053902 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 7 23:44:02.043855 systemd-resolved[256]: Clock change detected. Flushing caches.
May 7 23:44:02.044457 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 7 23:44:02.074687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:44:02.110557 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 7 23:44:02.147547 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 7 23:44:02.147668 kernel: hv_vmbus: registering driver hv_pci May 7 23:44:02.147680 kernel: sd 0:0:0:0: [sda] Write Protect is off May 7 23:44:02.147763 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 7 23:44:02.147846 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 7 23:44:02.147928 kernel: hv_pci a29e0770-ee5b-460f-9025-6bd64f4e5657: PCI VMBus probing: Using version 0x10004 May 7 23:44:02.227011 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 7 23:44:02.227052 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 7 23:44:02.227194 kernel: hv_pci a29e0770-ee5b-460f-9025-6bd64f4e5657: PCI host bridge to bus ee5b:00 May 7 23:44:02.227283 kernel: pci_bus ee5b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] May 7 23:44:02.227380 kernel: pci_bus ee5b:00: No busn resource found for root bus, will use [bus 00-ff] May 7 23:44:02.227459 kernel: pci ee5b:00:02.0: [15b3:1018] type 00 class 0x020000 May 7 23:44:02.227554 kernel: pci ee5b:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] May 7 23:44:02.227636 kernel: pci ee5b:00:02.0: enabling Extended Tags May 7 23:44:02.227717 kernel: pci ee5b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ee5b:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) May 7 23:44:02.227798 kernel: pci_bus ee5b:00: busn_res: [bus 00-ff] end is updated to 00 May 7 23:44:02.227874 kernel: pci ee5b:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] May 7 23:44:02.128380 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 7 23:44:02.189994 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 7 23:44:02.276408 kernel: mlx5_core ee5b:00:02.0: enabling device (0000 -> 0002) May 7 23:44:02.498510 kernel: mlx5_core ee5b:00:02.0: firmware version: 16.30.1284 May 7 23:44:02.498647 kernel: hv_netvsc 002248b7-303a-0022-48b7-303a002248b7 eth0: VF registering: eth1 May 7 23:44:02.498750 kernel: mlx5_core ee5b:00:02.0 eth1: joined to eth0 May 7 23:44:02.498854 kernel: mlx5_core ee5b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) May 7 23:44:02.506058 kernel: mlx5_core ee5b:00:02.0 enP61019s1: renamed from eth1 May 7 23:44:02.763040 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. May 7 23:44:02.839162 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. May 7 23:44:02.864062 kernel: BTRFS: device fsid a4d66dad-2d34-4ed0-87a7-f6519531b08f devid 1 transid 42 /dev/sda3 scanned by (udev-worker) (482) May 7 23:44:02.874074 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (498) May 7 23:44:02.881645 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. May 7 23:44:02.889120 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. May 7 23:44:02.905582 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 7 23:44:02.923788 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 7 23:44:02.959063 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 7 23:44:02.967060 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 7 23:44:03.978085 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 7 23:44:03.979078 disk-uuid[601]: The operation has completed successfully. May 7 23:44:04.035282 systemd[1]: disk-uuid.service: Deactivated successfully. May 7 23:44:04.037201 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
May 7 23:44:04.096183 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 7 23:44:04.109865 sh[687]: Success May 7 23:44:04.145085 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 7 23:44:04.357892 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 7 23:44:04.373317 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 7 23:44:04.384896 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 7 23:44:04.412125 kernel: BTRFS info (device dm-0): first mount of filesystem a4d66dad-2d34-4ed0-87a7-f6519531b08f May 7 23:44:04.412167 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 7 23:44:04.419534 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 7 23:44:04.424916 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 7 23:44:04.429410 kernel: BTRFS info (device dm-0): using free space tree May 7 23:44:04.747281 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 7 23:44:04.752169 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 7 23:44:04.772266 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 7 23:44:04.780244 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 7 23:44:04.820370 kernel: BTRFS info (device sda6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:44:04.820432 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 7 23:44:04.824530 kernel: BTRFS info (device sda6): using free space tree May 7 23:44:04.850063 kernel: BTRFS info (device sda6): auto enabling async discard May 7 23:44:04.861089 kernel: BTRFS info (device sda6): last unmount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:44:04.865972 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 7 23:44:04.887450 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 7 23:44:04.913379 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 7 23:44:04.933205 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 7 23:44:04.962944 systemd-networkd[868]: lo: Link UP May 7 23:44:04.962952 systemd-networkd[868]: lo: Gained carrier May 7 23:44:04.967761 systemd-networkd[868]: Enumeration completed May 7 23:44:04.968053 systemd[1]: Started systemd-networkd.service - Network Configuration. May 7 23:44:04.969071 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 7 23:44:04.969075 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 7 23:44:04.974999 systemd[1]: Reached target network.target - Network. 
May 7 23:44:05.065049 kernel: mlx5_core ee5b:00:02.0 enP61019s1: Link up May 7 23:44:05.104523 kernel: hv_netvsc 002248b7-303a-0022-48b7-303a002248b7 eth0: Data path switched to VF: enP61019s1 May 7 23:44:05.104814 systemd-networkd[868]: enP61019s1: Link UP May 7 23:44:05.104884 systemd-networkd[868]: eth0: Link UP May 7 23:44:05.104974 systemd-networkd[868]: eth0: Gained carrier May 7 23:44:05.104982 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 7 23:44:05.132298 systemd-networkd[868]: enP61019s1: Gained carrier May 7 23:44:05.143104 systemd-networkd[868]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 7 23:44:05.716243 ignition[835]: Ignition 2.20.0 May 7 23:44:05.716253 ignition[835]: Stage: fetch-offline May 7 23:44:05.721188 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 7 23:44:05.716289 ignition[835]: no configs at "/usr/lib/ignition/base.d" May 7 23:44:05.716298 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 7 23:44:05.716387 ignition[835]: parsed url from cmdline: "" May 7 23:44:05.716390 ignition[835]: no config URL provided May 7 23:44:05.716395 ignition[835]: reading system config file "/usr/lib/ignition/user.ign" May 7 23:44:05.749325 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 7 23:44:05.716401 ignition[835]: no config at "/usr/lib/ignition/user.ign" May 7 23:44:05.716406 ignition[835]: failed to fetch config: resource requires networking May 7 23:44:05.716574 ignition[835]: Ignition finished successfully May 7 23:44:05.773735 ignition[879]: Ignition 2.20.0 May 7 23:44:05.773742 ignition[879]: Stage: fetch May 7 23:44:05.773929 ignition[879]: no configs at "/usr/lib/ignition/base.d" May 7 23:44:05.773938 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 7 23:44:05.774088 ignition[879]: parsed url from cmdline: "" May 7 23:44:05.774091 ignition[879]: no config URL provided May 7 23:44:05.774096 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" May 7 23:44:05.774105 ignition[879]: no config at "/usr/lib/ignition/user.ign" May 7 23:44:05.774158 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 7 23:44:05.882910 ignition[879]: GET result: OK May 7 23:44:05.883051 ignition[879]: config has been read from IMDS userdata May 7 23:44:05.883089 ignition[879]: parsing config with SHA512: b52031bdec831548023e62f6ea561cd6238c052b544bef5681a4be46597ece1d28c6045234489ad6aae3049e3e45c34e43e26f175bdd671f4524c5ec0c18f012 May 7 23:44:05.887839 unknown[879]: fetched base config from "system" May 7 23:44:05.888818 ignition[879]: fetch: fetch complete May 7 23:44:05.887847 unknown[879]: fetched base config from "system" May 7 23:44:05.888824 ignition[879]: fetch: fetch passed May 7 23:44:05.887858 unknown[879]: fetched user config from "azure" May 7 23:44:05.888920 ignition[879]: Ignition finished successfully May 7 23:44:05.892832 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 7 23:44:05.913305 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 7 23:44:05.938429 ignition[885]: Ignition 2.20.0 May 7 23:44:05.938440 ignition[885]: Stage: kargs May 7 23:44:05.944677 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 7 23:44:05.938609 ignition[885]: no configs at "/usr/lib/ignition/base.d" May 7 23:44:05.938618 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 7 23:44:05.965284 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 7 23:44:05.939529 ignition[885]: kargs: kargs passed May 7 23:44:05.939573 ignition[885]: Ignition finished successfully May 7 23:44:05.990052 ignition[892]: Ignition 2.20.0 May 7 23:44:05.990070 ignition[892]: Stage: disks May 7 23:44:05.994872 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 7 23:44:05.990289 ignition[892]: no configs at "/usr/lib/ignition/base.d" May 7 23:44:05.990300 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 7 23:44:06.008250 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 7 23:44:05.991350 ignition[892]: disks: disks passed May 7 23:44:06.019401 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 7 23:44:05.991402 ignition[892]: Ignition finished successfully May 7 23:44:06.032744 systemd[1]: Reached target local-fs.target - Local File Systems. May 7 23:44:06.044399 systemd[1]: Reached target sysinit.target - System Initialization. May 7 23:44:06.053279 systemd[1]: Reached target basic.target - Basic System. May 7 23:44:06.081282 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 7 23:44:06.142527 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks May 7 23:44:06.149408 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 7 23:44:06.172236 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 7 23:44:06.228094 kernel: EXT4-fs (sda9): mounted filesystem f291ddc8-664e-45dc-bbf9-8344dca1a297 r/w with ordered data mode. Quota mode: none. May 7 23:44:06.228527 systemd[1]: Mounted sysroot.mount - /sysroot. May 7 23:44:06.236041 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 7 23:44:06.285118 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 7 23:44:06.292176 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 7 23:44:06.303197 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 7 23:44:06.321627 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 7 23:44:06.321673 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 7 23:44:06.342429 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 7 23:44:06.374169 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (912) May 7 23:44:06.366327 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 7 23:44:06.400206 kernel: BTRFS info (device sda6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:44:06.400280 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 7 23:44:06.400291 kernel: BTRFS info (device sda6): using free space tree May 7 23:44:06.412770 kernel: BTRFS info (device sda6): auto enabling async discard May 7 23:44:06.407562 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 7 23:44:06.440148 systemd-networkd[868]: enP61019s1: Gained IPv6LL May 7 23:44:06.830406 coreos-metadata[914]: May 07 23:44:06.830 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 7 23:44:06.840948 coreos-metadata[914]: May 07 23:44:06.840 INFO Fetch successful May 7 23:44:06.847072 coreos-metadata[914]: May 07 23:44:06.846 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 7 23:44:06.862907 coreos-metadata[914]: May 07 23:44:06.862 INFO Fetch successful May 7 23:44:06.871384 coreos-metadata[914]: May 07 23:44:06.871 INFO wrote hostname ci-4230.1.1-n-afbb805c8a to /sysroot/etc/hostname May 7 23:44:06.881568 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 7 23:44:07.076056 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory May 7 23:44:07.082235 systemd-networkd[868]: eth0: Gained IPv6LL May 7 23:44:07.120349 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory May 7 23:44:07.142722 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory May 7 23:44:07.183735 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory May 7 23:44:07.799095 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 7 23:44:07.817218 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 7 23:44:07.831222 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 7 23:44:07.851569 kernel: BTRFS info (device sda6): last unmount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:44:07.844053 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 7 23:44:07.869926 ignition[1036]: INFO : Ignition 2.20.0 May 7 23:44:07.869926 ignition[1036]: INFO : Stage: mount May 7 23:44:07.879516 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" May 7 23:44:07.879516 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 7 23:44:07.879516 ignition[1036]: INFO : mount: mount passed May 7 23:44:07.879516 ignition[1036]: INFO : Ignition finished successfully May 7 23:44:07.880804 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 7 23:44:07.890504 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 7 23:44:07.916262 systemd[1]: Starting ignition-files.service - Ignition (files)... May 7 23:44:07.929298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 7 23:44:07.958174 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1047) May 7 23:44:07.968024 kernel: BTRFS info (device sda6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:44:07.973903 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 7 23:44:07.978127 kernel: BTRFS info (device sda6): using free space tree May 7 23:44:07.985055 kernel: BTRFS info (device sda6): auto enabling async discard May 7 23:44:07.985970 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 7 23:44:08.013514 ignition[1065]: INFO : Ignition 2.20.0 May 7 23:44:08.018739 ignition[1065]: INFO : Stage: files May 7 23:44:08.018739 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" May 7 23:44:08.018739 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 7 23:44:08.018739 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping May 7 23:44:08.048305 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 7 23:44:08.048305 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 7 23:44:08.112470 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 7 23:44:08.121786 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 7 23:44:08.121786 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 7 23:44:08.112919 unknown[1065]: wrote ssh authorized keys file for user: core May 7 23:44:08.142782 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 7 23:44:08.142782 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 7 23:44:08.195576 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 7 23:44:08.311918 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 7 23:44:08.322862 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 7 23:44:08.322862 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 7 23:44:08.840689 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 7 23:44:09.294722 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 7 23:44:09.294722 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 7 23:44:09.313780 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 7 23:44:09.313780 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 7 23:44:09.313780 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 7 23:44:09.313780 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 7 23:44:09.313780 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 7 23:44:09.313780 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 7 23:44:09.313780 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 7 23:44:09.313780 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 7 23:44:09.313780 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 7 23:44:09.313780 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 7 23:44:09.313780 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 7 23:44:09.313780 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 7 23:44:09.313780 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 7 23:44:09.709620 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 7 23:44:09.922574 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 7 23:44:09.922574 ignition[1065]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 7 23:44:09.941907 ignition[1065]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 7 23:44:09.941907 ignition[1065]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 7 23:44:09.941907 ignition[1065]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 7 23:44:09.941907 ignition[1065]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 7 23:44:09.941907 ignition[1065]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 7 23:44:09.993159 ignition[1065]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 7 
23:44:09.993159 ignition[1065]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 7 23:44:09.993159 ignition[1065]: INFO : files: files passed May 7 23:44:09.993159 ignition[1065]: INFO : Ignition finished successfully May 7 23:44:09.955250 systemd[1]: Finished ignition-files.service - Ignition (files). May 7 23:44:09.993331 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 7 23:44:10.012226 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 7 23:44:10.039782 systemd[1]: ignition-quench.service: Deactivated successfully. May 7 23:44:10.083420 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 7 23:44:10.083420 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 7 23:44:10.039870 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 7 23:44:10.108117 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 7 23:44:10.048424 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 7 23:44:10.063430 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 7 23:44:10.100291 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 7 23:44:10.145625 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 7 23:44:10.145759 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 7 23:44:10.158076 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 7 23:44:10.170210 systemd[1]: Reached target initrd.target - Initrd Default Target. 
May 7 23:44:10.181299 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 7 23:44:10.200285 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 7 23:44:10.223131 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 7 23:44:10.241251 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 7 23:44:10.258809 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 7 23:44:10.265456 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 7 23:44:10.277380 systemd[1]: Stopped target timers.target - Timer Units. May 7 23:44:10.288325 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 7 23:44:10.288489 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 7 23:44:10.304630 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 7 23:44:10.316118 systemd[1]: Stopped target basic.target - Basic System. May 7 23:44:10.326079 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 7 23:44:10.336350 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 7 23:44:10.350275 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 7 23:44:10.365005 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 7 23:44:10.378391 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 7 23:44:10.391679 systemd[1]: Stopped target sysinit.target - System Initialization. May 7 23:44:10.404091 systemd[1]: Stopped target local-fs.target - Local File Systems. May 7 23:44:10.414334 systemd[1]: Stopped target swap.target - Swaps. May 7 23:44:10.423491 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
May 7 23:44:10.423654 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 7 23:44:10.438672 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 7 23:44:10.451944 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 7 23:44:10.465178 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 7 23:44:10.477830 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 7 23:44:10.484941 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 7 23:44:10.485118 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 7 23:44:10.503564 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 7 23:44:10.503738 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 7 23:44:10.515417 systemd[1]: ignition-files.service: Deactivated successfully. May 7 23:44:10.515557 systemd[1]: Stopped ignition-files.service - Ignition (files). May 7 23:44:10.526547 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 7 23:44:10.526694 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 7 23:44:10.561718 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 7 23:44:10.573334 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 7 23:44:10.582897 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 7 23:44:10.583143 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 7 23:44:10.607890 ignition[1116]: INFO : Ignition 2.20.0 May 7 23:44:10.607890 ignition[1116]: INFO : Stage: umount May 7 23:44:10.607890 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d" May 7 23:44:10.607890 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 7 23:44:10.607890 ignition[1116]: INFO : umount: umount passed May 7 23:44:10.607890 ignition[1116]: INFO : Ignition finished successfully May 7 23:44:10.602197 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 7 23:44:10.602355 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 7 23:44:10.623260 systemd[1]: ignition-mount.service: Deactivated successfully. May 7 23:44:10.623354 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 7 23:44:10.637340 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 7 23:44:10.637449 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 7 23:44:10.656464 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 7 23:44:10.657927 systemd[1]: ignition-disks.service: Deactivated successfully. May 7 23:44:10.657990 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 7 23:44:10.668074 systemd[1]: ignition-kargs.service: Deactivated successfully. May 7 23:44:10.668130 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 7 23:44:10.674672 systemd[1]: ignition-fetch.service: Deactivated successfully. May 7 23:44:10.674724 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 7 23:44:10.685098 systemd[1]: Stopped target network.target - Network. May 7 23:44:10.700173 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 7 23:44:10.700237 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 7 23:44:10.712144 systemd[1]: Stopped target paths.target - Path Units. 
May 7 23:44:10.722689 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 7 23:44:10.727268 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 7 23:44:10.733873 systemd[1]: Stopped target slices.target - Slice Units. May 7 23:44:10.749619 systemd[1]: Stopped target sockets.target - Socket Units. May 7 23:44:10.759164 systemd[1]: iscsid.socket: Deactivated successfully. May 7 23:44:10.759212 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 7 23:44:10.770005 systemd[1]: iscsiuio.socket: Deactivated successfully. May 7 23:44:10.770051 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 7 23:44:10.780410 systemd[1]: ignition-setup.service: Deactivated successfully. May 7 23:44:10.780461 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 7 23:44:10.791410 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 7 23:44:10.791454 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 7 23:44:10.802294 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 7 23:44:10.812129 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 7 23:44:10.828049 systemd[1]: systemd-resolved.service: Deactivated successfully. May 7 23:44:10.828145 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 7 23:44:10.842898 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 7 23:44:10.843624 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 7 23:44:10.843721 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 7 23:44:10.865894 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 7 23:44:10.866156 systemd[1]: systemd-networkd.service: Deactivated successfully. 
May 7 23:44:10.866276 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 7 23:44:10.895618 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 7 23:44:11.103593 kernel: hv_netvsc 002248b7-303a-0022-48b7-303a002248b7 eth0: Data path switched from VF: enP61019s1
May 7 23:44:10.896252 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 7 23:44:10.896320 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 7 23:44:10.921278 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 7 23:44:10.931135 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 7 23:44:10.931216 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 7 23:44:10.948662 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 7 23:44:10.948724 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 7 23:44:10.964407 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 7 23:44:10.964458 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 7 23:44:10.970511 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 7 23:44:10.988992 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 7 23:44:11.013276 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 7 23:44:11.013422 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 7 23:44:11.029662 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 7 23:44:11.029736 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 7 23:44:11.041999 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 7 23:44:11.042054 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 7 23:44:11.054888 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 7 23:44:11.054942 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 7 23:44:11.084703 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 7 23:44:11.084771 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 7 23:44:11.096965 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 7 23:44:11.097062 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 7 23:44:11.148300 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 7 23:44:11.166335 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 7 23:44:11.166426 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 7 23:44:11.188960 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 7 23:44:11.189054 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 7 23:44:11.198191 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 7 23:44:11.198250 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 7 23:44:11.212201 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 7 23:44:11.212262 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:44:11.232652 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 7 23:44:11.232750 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 7 23:44:11.245201 systemd[1]: network-cleanup.service: Deactivated successfully.
May 7 23:44:11.245286 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 7 23:44:11.257527 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 7 23:44:11.257604 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 7 23:44:11.445170 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
May 7 23:44:11.279550 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 7 23:44:11.292034 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 7 23:44:11.292139 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 7 23:44:11.324360 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 7 23:44:11.356983 systemd[1]: Switching root.
May 7 23:44:11.471256 systemd-journald[218]: Journal stopped
May 7 23:44:16.308207 kernel: SELinux: policy capability network_peer_controls=1
May 7 23:44:16.308233 kernel: SELinux: policy capability open_perms=1
May 7 23:44:16.308243 kernel: SELinux: policy capability extended_socket_class=1
May 7 23:44:16.308251 kernel: SELinux: policy capability always_check_network=0
May 7 23:44:16.308261 kernel: SELinux: policy capability cgroup_seclabel=1
May 7 23:44:16.308268 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 7 23:44:16.308277 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 7 23:44:16.308285 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 7 23:44:16.308293 kernel: audit: type=1403 audit(1746661452.443:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 7 23:44:16.308303 systemd[1]: Successfully loaded SELinux policy in 146.752ms.
May 7 23:44:16.308314 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.466ms.
May 7 23:44:16.308325 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 7 23:44:16.308334 systemd[1]: Detected virtualization microsoft.
May 7 23:44:16.308342 systemd[1]: Detected architecture arm64.
May 7 23:44:16.308351 systemd[1]: Detected first boot.
May 7 23:44:16.308362 systemd[1]: Hostname set to .
May 7 23:44:16.308371 systemd[1]: Initializing machine ID from random generator.
May 7 23:44:16.308379 zram_generator::config[1161]: No configuration found.
May 7 23:44:16.308388 kernel: NET: Registered PF_VSOCK protocol family
May 7 23:44:16.308397 systemd[1]: Populated /etc with preset unit settings.
May 7 23:44:16.308406 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 7 23:44:16.308415 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 7 23:44:16.308425 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 7 23:44:16.308434 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 7 23:44:16.308443 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 7 23:44:16.308452 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 7 23:44:16.308461 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 7 23:44:16.308470 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 7 23:44:16.308479 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 7 23:44:16.308490 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 7 23:44:16.308499 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 7 23:44:16.308510 systemd[1]: Created slice user.slice - User and Session Slice.
May 7 23:44:16.308519 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 7 23:44:16.308533 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 7 23:44:16.308542 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 7 23:44:16.308551 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 7 23:44:16.308560 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 7 23:44:16.308571 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 7 23:44:16.308580 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 7 23:44:16.308588 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 7 23:44:16.308600 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 7 23:44:16.308609 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 7 23:44:16.308618 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 7 23:44:16.308627 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 7 23:44:16.308636 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 7 23:44:16.308647 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 7 23:44:16.308656 systemd[1]: Reached target slices.target - Slice Units.
May 7 23:44:16.308665 systemd[1]: Reached target swap.target - Swaps.
May 7 23:44:16.308675 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 7 23:44:16.308684 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 7 23:44:16.308693 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 7 23:44:16.308704 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 7 23:44:16.308715 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 7 23:44:16.308724 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 7 23:44:16.308733 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 7 23:44:16.308742 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 7 23:44:16.308752 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 7 23:44:16.308761 systemd[1]: Mounting media.mount - External Media Directory...
May 7 23:44:16.308772 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 7 23:44:16.308781 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 7 23:44:16.308790 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 7 23:44:16.308800 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 7 23:44:16.308809 systemd[1]: Reached target machines.target - Containers.
May 7 23:44:16.308819 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 7 23:44:16.308828 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 7 23:44:16.308838 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 7 23:44:16.308848 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 7 23:44:16.308858 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 7 23:44:16.308867 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 7 23:44:16.308876 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 7 23:44:16.308885 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 7 23:44:16.308894 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 7 23:44:16.308904 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 7 23:44:16.308914 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 7 23:44:16.308925 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 7 23:44:16.308935 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 7 23:44:16.308944 systemd[1]: Stopped systemd-fsck-usr.service.
May 7 23:44:16.308953 kernel: fuse: init (API version 7.39)
May 7 23:44:16.308962 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 7 23:44:16.308971 kernel: loop: module loaded
May 7 23:44:16.308980 systemd[1]: Starting systemd-journald.service - Journal Service...
May 7 23:44:16.308989 kernel: ACPI: bus type drm_connector registered
May 7 23:44:16.308997 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 7 23:44:16.309008 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 7 23:44:16.309241 systemd-journald[1266]: Collecting audit messages is disabled.
May 7 23:44:16.309270 systemd-journald[1266]: Journal started
May 7 23:44:16.309294 systemd-journald[1266]: Runtime Journal (/run/log/journal/018828677cf34240b4692e62e734cc07) is 8M, max 78.5M, 70.5M free.
May 7 23:44:15.360524 systemd[1]: Queued start job for default target multi-user.target.
May 7 23:44:15.367999 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 7 23:44:15.368394 systemd[1]: systemd-journald.service: Deactivated successfully.
May 7 23:44:15.368719 systemd[1]: systemd-journald.service: Consumed 3.369s CPU time.
May 7 23:44:16.324547 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 7 23:44:16.344353 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 7 23:44:16.367137 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 7 23:44:16.367206 systemd[1]: verity-setup.service: Deactivated successfully.
May 7 23:44:16.375743 systemd[1]: Stopped verity-setup.service.
May 7 23:44:16.396141 systemd[1]: Started systemd-journald.service - Journal Service.
May 7 23:44:16.397021 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 7 23:44:16.403270 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 7 23:44:16.409527 systemd[1]: Mounted media.mount - External Media Directory.
May 7 23:44:16.415331 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 7 23:44:16.421521 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 7 23:44:16.428178 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 7 23:44:16.435059 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 7 23:44:16.444108 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 7 23:44:16.451331 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 7 23:44:16.452083 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 7 23:44:16.460641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 7 23:44:16.460839 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 7 23:44:16.467460 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 7 23:44:16.467628 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 7 23:44:16.475543 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 7 23:44:16.475720 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 7 23:44:16.483390 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 7 23:44:16.485152 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 7 23:44:16.493813 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 7 23:44:16.493989 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 7 23:44:16.503762 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 7 23:44:16.511620 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 7 23:44:16.518840 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 7 23:44:16.527877 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 7 23:44:16.536300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 7 23:44:16.555947 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 7 23:44:16.573146 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 7 23:44:16.582125 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 7 23:44:16.588946 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 7 23:44:16.588989 systemd[1]: Reached target local-fs.target - Local File Systems.
May 7 23:44:16.598652 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 7 23:44:16.609777 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 7 23:44:16.617629 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 7 23:44:16.623362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 7 23:44:16.625118 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 7 23:44:16.633315 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 7 23:44:16.640766 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 7 23:44:16.641966 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 7 23:44:16.649082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 7 23:44:16.650157 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 7 23:44:16.659277 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 7 23:44:16.669228 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 7 23:44:16.677343 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 7 23:44:16.691781 systemd-journald[1266]: Time spent on flushing to /var/log/journal/018828677cf34240b4692e62e734cc07 is 14.201ms for 911 entries.
May 7 23:44:16.691781 systemd-journald[1266]: System Journal (/var/log/journal/018828677cf34240b4692e62e734cc07) is 8M, max 2.6G, 2.6G free.
May 7 23:44:16.749208 systemd-journald[1266]: Received client request to flush runtime journal.
May 7 23:44:16.749258 kernel: loop0: detected capacity change from 0 to 113512
May 7 23:44:16.699921 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 7 23:44:16.706715 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 7 23:44:16.716474 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 7 23:44:16.729018 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 7 23:44:16.741955 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 7 23:44:16.753273 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 7 23:44:16.765022 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 7 23:44:16.775429 udevadm[1305]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 7 23:44:16.795833 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 7 23:44:16.820337 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 7 23:44:16.821618 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 7 23:44:16.842580 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
May 7 23:44:16.843062 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
May 7 23:44:16.847581 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 7 23:44:16.863217 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 7 23:44:17.028676 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 7 23:44:17.048531 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 7 23:44:17.068070 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
May 7 23:44:17.068087 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
May 7 23:44:17.073042 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 7 23:44:17.087110 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 7 23:44:17.157074 kernel: loop1: detected capacity change from 0 to 123192
May 7 23:44:17.571057 kernel: loop2: detected capacity change from 0 to 28720
May 7 23:44:17.937081 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 7 23:44:17.952221 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 7 23:44:17.967073 kernel: loop3: detected capacity change from 0 to 189592
May 7 23:44:17.981201 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
May 7 23:44:18.002059 kernel: loop4: detected capacity change from 0 to 113512
May 7 23:44:18.012102 kernel: loop5: detected capacity change from 0 to 123192
May 7 23:44:18.023069 kernel: loop6: detected capacity change from 0 to 28720
May 7 23:44:18.033047 kernel: loop7: detected capacity change from 0 to 189592
May 7 23:44:18.038221 (sd-merge)[1332]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
May 7 23:44:18.038685 (sd-merge)[1332]: Merged extensions into '/usr'.
May 7 23:44:18.042297 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)...
May 7 23:44:18.042431 systemd[1]: Reloading...
May 7 23:44:18.104070 zram_generator::config[1360]: No configuration found.
May 7 23:44:18.232755 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 7 23:44:18.303068 systemd[1]: Reloading finished in 260 ms.
May 7 23:44:18.324160 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 7 23:44:18.345318 systemd[1]: Starting ensure-sysext.service...
May 7 23:44:18.351664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 7 23:44:18.366200 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 7 23:44:18.383224 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 7 23:44:18.402072 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 7 23:44:18.402298 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 7 23:44:18.403728 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 7 23:44:18.403966 systemd-tmpfiles[1416]: ACLs are not supported, ignoring.
May 7 23:44:18.404022 systemd-tmpfiles[1416]: ACLs are not supported, ignoring.
May 7 23:44:18.414661 systemd[1]: Reload requested from client PID 1415 ('systemctl') (unit ensure-sysext.service)...
May 7 23:44:18.414808 systemd[1]: Reloading...
May 7 23:44:18.426719 systemd-tmpfiles[1416]: Detected autofs mount point /boot during canonicalization of boot.
May 7 23:44:18.426730 systemd-tmpfiles[1416]: Skipping /boot
May 7 23:44:18.447616 systemd-tmpfiles[1416]: Detected autofs mount point /boot during canonicalization of boot.
May 7 23:44:18.447632 systemd-tmpfiles[1416]: Skipping /boot
May 7 23:44:18.560101 zram_generator::config[1473]: No configuration found.
May 7 23:44:18.605325 kernel: hv_vmbus: registering driver hv_balloon
May 7 23:44:18.605423 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 7 23:44:18.616064 kernel: mousedev: PS/2 mouse device common for all mice
May 7 23:44:18.616163 kernel: hv_balloon: Memory hot add disabled on ARM64
May 7 23:44:18.626633 kernel: hv_vmbus: registering driver hyperv_fb
May 7 23:44:18.641077 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 7 23:44:18.654072 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 7 23:44:18.658263 kernel: Console: switching to colour dummy device 80x25
May 7 23:44:18.661681 kernel: Console: switching to colour frame buffer device 128x48
May 7 23:44:18.735126 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1432)
May 7 23:44:18.760162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 7 23:44:18.854714 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 7 23:44:18.854822 systemd[1]: Reloading finished in 439 ms.
May 7 23:44:18.869813 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 7 23:44:18.916080 systemd[1]: Finished ensure-sysext.service.
May 7 23:44:18.930215 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 7 23:44:18.956978 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 7 23:44:18.974181 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 7 23:44:19.005186 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 7 23:44:19.012658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 7 23:44:19.016278 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 7 23:44:19.025891 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 7 23:44:19.034604 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 7 23:44:19.046234 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 7 23:44:19.055955 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 7 23:44:19.062832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 7 23:44:19.064209 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 7 23:44:19.076237 lvm[1607]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 7 23:44:19.078102 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 7 23:44:19.080246 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 7 23:44:19.097584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 7 23:44:19.103405 systemd[1]: Reached target time-set.target - System Time Set.
May 7 23:44:19.115761 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 7 23:44:19.126292 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 7 23:44:19.142164 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 7 23:44:19.152244 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 7 23:44:19.160854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 7 23:44:19.161362 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 7 23:44:19.168467 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 7 23:44:19.168815 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 7 23:44:19.176087 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 7 23:44:19.176518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 7 23:44:19.184333 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 7 23:44:19.184505 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 7 23:44:19.190758 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 7 23:44:19.197996 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 7 23:44:19.214878 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 7 23:44:19.231478 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 7 23:44:19.241687 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 7 23:44:19.241941 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 7 23:44:19.244957 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 7 23:44:19.253480 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 7 23:44:19.264184 lvm[1649]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 7 23:44:19.272591 augenrules[1655]: No rules
May 7 23:44:19.275507 systemd[1]: audit-rules.service: Deactivated successfully.
May 7 23:44:19.275727 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 7 23:44:19.303951 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 7 23:44:19.358531 systemd-networkd[1428]: lo: Link UP
May 7 23:44:19.358833 systemd-networkd[1428]: lo: Gained carrier
May 7 23:44:19.360833 systemd-networkd[1428]: Enumeration completed
May 7 23:44:19.360942 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 7 23:44:19.361378 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 7 23:44:19.361454 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 7 23:44:19.376322 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 7 23:44:19.384359 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 7 23:44:19.404002 systemd-resolved[1619]: Positive Trust Anchors:
May 7 23:44:19.404020 systemd-resolved[1619]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 7 23:44:19.404236 systemd-resolved[1619]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 7 23:44:19.423670 systemd-resolved[1619]: Using system hostname 'ci-4230.1.1-n-afbb805c8a'.
May 7 23:44:19.435087 kernel: mlx5_core ee5b:00:02.0 enP61019s1: Link up
May 7 23:44:19.463300 kernel: hv_netvsc 002248b7-303a-0022-48b7-303a002248b7 eth0: Data path switched to VF: enP61019s1
May 7 23:44:19.466168 systemd-networkd[1428]: enP61019s1: Link UP
May 7 23:44:19.466339 systemd-networkd[1428]: eth0: Link UP
May 7 23:44:19.466342 systemd-networkd[1428]: eth0: Gained carrier
May 7 23:44:19.466363 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 7 23:44:19.468625 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:44:19.475210 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 7 23:44:19.475394 systemd-networkd[1428]: enP61019s1: Gained carrier
May 7 23:44:19.481277 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 7 23:44:19.488645 systemd[1]: Reached target network.target - Network.
May 7 23:44:19.493100 systemd-networkd[1428]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 7 23:44:19.493385 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 7 23:44:19.668639 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 7 23:44:19.676333 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 7 23:44:20.776232 systemd-networkd[1428]: enP61019s1: Gained IPv6LL May 7 23:44:21.416224 systemd-networkd[1428]: eth0: Gained IPv6LL May 7 23:44:21.420080 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 7 23:44:21.429215 systemd[1]: Reached target network-online.target - Network is Online. May 7 23:44:22.365878 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 7 23:44:22.379063 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 7 23:44:22.396186 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 7 23:44:22.404989 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 7 23:44:22.412155 systemd[1]: Reached target sysinit.target - System Initialization. May 7 23:44:22.418616 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 7 23:44:22.425796 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 7 23:44:22.432823 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 7 23:44:22.438512 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
May 7 23:44:22.444999 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 7 23:44:22.451704 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 7 23:44:22.451741 systemd[1]: Reached target paths.target - Path Units. May 7 23:44:22.456437 systemd[1]: Reached target timers.target - Timer Units. May 7 23:44:22.490946 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 7 23:44:22.498934 systemd[1]: Starting docker.socket - Docker Socket for the API... May 7 23:44:22.506209 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 7 23:44:22.513129 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 7 23:44:22.519632 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 7 23:44:22.535697 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 7 23:44:22.542267 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 7 23:44:22.549106 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 7 23:44:22.555217 systemd[1]: Reached target sockets.target - Socket Units. May 7 23:44:22.560572 systemd[1]: Reached target basic.target - Basic System. May 7 23:44:22.565961 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 7 23:44:22.565987 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 7 23:44:22.578133 systemd[1]: Starting chronyd.service - NTP client/server... May 7 23:44:22.585624 systemd[1]: Starting containerd.service - containerd container runtime... May 7 23:44:22.601290 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
May 7 23:44:22.612255 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 7 23:44:22.618625 (chronyd)[1676]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 7 23:44:22.620142 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 7 23:44:22.631200 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 7 23:44:22.639945 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 7 23:44:22.640141 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). May 7 23:44:22.641542 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. May 7 23:44:22.649427 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). May 7 23:44:22.651805 KVP[1686]: KVP starting; pid is:1686 May 7 23:44:22.656351 chronyd[1689]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 7 23:44:22.657117 jq[1683]: false May 7 23:44:22.652208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:44:22.668427 kernel: hv_utils: KVP IC version 4.0 May 7 23:44:22.668242 KVP[1686]: KVP LIC Version: 3.1 May 7 23:44:22.663236 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 7 23:44:22.674548 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 7 23:44:22.675416 chronyd[1689]: Timezone right/UTC failed leap second check, ignoring May 7 23:44:22.679855 chronyd[1689]: Loaded seccomp filter (level 2) May 7 23:44:22.683976 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
May 7 23:44:22.701430 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 7 23:44:22.704055 extend-filesystems[1684]: Found loop4 May 7 23:44:22.724365 extend-filesystems[1684]: Found loop5 May 7 23:44:22.724365 extend-filesystems[1684]: Found loop6 May 7 23:44:22.724365 extend-filesystems[1684]: Found loop7 May 7 23:44:22.724365 extend-filesystems[1684]: Found sda May 7 23:44:22.724365 extend-filesystems[1684]: Found sda1 May 7 23:44:22.724365 extend-filesystems[1684]: Found sda2 May 7 23:44:22.724365 extend-filesystems[1684]: Found sda3 May 7 23:44:22.724365 extend-filesystems[1684]: Found usr May 7 23:44:22.724365 extend-filesystems[1684]: Found sda4 May 7 23:44:22.724365 extend-filesystems[1684]: Found sda6 May 7 23:44:22.724365 extend-filesystems[1684]: Found sda7 May 7 23:44:22.724365 extend-filesystems[1684]: Found sda9 May 7 23:44:22.724365 extend-filesystems[1684]: Checking size of /dev/sda9 May 7 23:44:22.714320 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 7 23:44:22.859322 extend-filesystems[1684]: Old size kept for /dev/sda9 May 7 23:44:22.859322 extend-filesystems[1684]: Found sr0 May 7 23:44:22.741499 dbus-daemon[1682]: [system] SELinux support is enabled May 7 23:44:22.742932 systemd[1]: Starting systemd-logind.service - User Login Management... May 7 23:44:22.769092 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 7 23:44:22.769622 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 7 23:44:22.870293 coreos-metadata[1678]: May 07 23:44:22.870 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 7 23:44:22.870479 update_engine[1710]: I20250507 23:44:22.840186 1710 main.cc:92] Flatcar Update Engine starting May 7 23:44:22.870479 update_engine[1710]: I20250507 23:44:22.841387 1710 update_check_scheduler.cc:74] Next update check in 5m53s May 7 23:44:22.779678 systemd[1]: Starting update-engine.service - Update Engine... May 7 23:44:22.870725 jq[1716]: true May 7 23:44:22.816184 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 7 23:44:22.874136 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 7 23:44:22.882362 coreos-metadata[1678]: May 07 23:44:22.882 INFO Fetch successful May 7 23:44:22.882362 coreos-metadata[1678]: May 07 23:44:22.882 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 7 23:44:22.886534 systemd[1]: Started chronyd.service - NTP client/server. May 7 23:44:22.893634 coreos-metadata[1678]: May 07 23:44:22.893 INFO Fetch successful May 7 23:44:22.893820 coreos-metadata[1678]: May 07 23:44:22.893 INFO Fetching http://168.63.129.16/machine/59ac7fe9-3f4c-4137-a987-56c6870d25d8/a3a375c3%2D5f45%2D4866%2D88f1%2D5b7c6134c2e4.%5Fci%2D4230.1.1%2Dn%2Dafbb805c8a?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 7 23:44:22.896723 coreos-metadata[1678]: May 07 23:44:22.896 INFO Fetch successful May 7 23:44:22.897141 coreos-metadata[1678]: May 07 23:44:22.897 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 7 23:44:22.900476 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 7 23:44:22.900701 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 7 23:44:22.900966 systemd[1]: extend-filesystems.service: Deactivated successfully. 
May 7 23:44:22.903157 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 7 23:44:22.911001 coreos-metadata[1678]: May 07 23:44:22.909 INFO Fetch successful May 7 23:44:22.919592 systemd[1]: motdgen.service: Deactivated successfully. May 7 23:44:22.920230 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 7 23:44:22.931057 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 7 23:44:22.945396 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 7 23:44:22.945574 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 7 23:44:22.953746 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1730) May 7 23:44:22.973231 systemd-logind[1703]: New seat seat0. May 7 23:44:22.977174 systemd-logind[1703]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 7 23:44:22.977616 systemd[1]: Started systemd-logind.service - User Login Management. May 7 23:44:22.984111 (ntainerd)[1741]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 7 23:44:22.990826 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 7 23:44:22.990172 dbus-daemon[1682]: [system] Successfully activated service 'org.freedesktop.systemd1' May 7 23:44:22.990855 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 7 23:44:23.001366 jq[1740]: true May 7 23:44:23.003336 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
May 7 23:44:23.003360 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 7 23:44:23.039131 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 7 23:44:23.052591 systemd[1]: Started update-engine.service - Update Engine. May 7 23:44:23.065350 tar[1737]: linux-arm64/helm May 7 23:44:23.067596 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 7 23:44:23.087235 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 7 23:44:23.236345 bash[1807]: Updated "/home/core/.ssh/authorized_keys" May 7 23:44:23.238751 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 7 23:44:23.255728 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 7 23:44:23.319170 locksmithd[1787]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 7 23:44:23.679116 containerd[1741]: time="2025-05-07T23:44:23.677481320Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 7 23:44:23.709702 containerd[1741]: time="2025-05-07T23:44:23.709642360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 7 23:44:23.711677 containerd[1741]: time="2025-05-07T23:44:23.711634920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 7 23:44:23.711800 containerd[1741]: time="2025-05-07T23:44:23.711785720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 May 7 23:44:23.711856 containerd[1741]: time="2025-05-07T23:44:23.711843240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 7 23:44:23.712096 containerd[1741]: time="2025-05-07T23:44:23.712073160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 7 23:44:23.713066 containerd[1741]: time="2025-05-07T23:44:23.712170680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 7 23:44:23.713066 containerd[1741]: time="2025-05-07T23:44:23.712262880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 7 23:44:23.713066 containerd[1741]: time="2025-05-07T23:44:23.712276080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 7 23:44:23.713066 containerd[1741]: time="2025-05-07T23:44:23.712502000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 7 23:44:23.713066 containerd[1741]: time="2025-05-07T23:44:23.712517280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 7 23:44:23.713066 containerd[1741]: time="2025-05-07T23:44:23.712530400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 7 23:44:23.713066 containerd[1741]: time="2025-05-07T23:44:23.712539880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 May 7 23:44:23.713066 containerd[1741]: time="2025-05-07T23:44:23.712619240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 7 23:44:23.713066 containerd[1741]: time="2025-05-07T23:44:23.712804560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 7 23:44:23.713066 containerd[1741]: time="2025-05-07T23:44:23.712922720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 7 23:44:23.713066 containerd[1741]: time="2025-05-07T23:44:23.712935800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 7 23:44:23.713299 containerd[1741]: time="2025-05-07T23:44:23.713006600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 7 23:44:23.713387 containerd[1741]: time="2025-05-07T23:44:23.713366480Z" level=info msg="metadata content store policy set" policy=shared May 7 23:44:23.728494 containerd[1741]: time="2025-05-07T23:44:23.728449800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 7 23:44:23.729474 containerd[1741]: time="2025-05-07T23:44:23.729411640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 7 23:44:23.729628 containerd[1741]: time="2025-05-07T23:44:23.729611480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 7 23:44:23.730252 containerd[1741]: time="2025-05-07T23:44:23.730232040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 May 7 23:44:23.730361 containerd[1741]: time="2025-05-07T23:44:23.730345640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 7 23:44:23.730667 containerd[1741]: time="2025-05-07T23:44:23.730630240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 7 23:44:23.734155 containerd[1741]: time="2025-05-07T23:44:23.730976680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 7 23:44:23.734155 containerd[1741]: time="2025-05-07T23:44:23.731124720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 7 23:44:23.734155 containerd[1741]: time="2025-05-07T23:44:23.731143000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 7 23:44:23.734155 containerd[1741]: time="2025-05-07T23:44:23.731158120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 7 23:44:23.734155 containerd[1741]: time="2025-05-07T23:44:23.731171760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 7 23:44:23.734155 containerd[1741]: time="2025-05-07T23:44:23.731184360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 7 23:44:23.734155 containerd[1741]: time="2025-05-07T23:44:23.731197000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 7 23:44:23.734155 containerd[1741]: time="2025-05-07T23:44:23.731210240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 May 7 23:44:23.734155 containerd[1741]: time="2025-05-07T23:44:23.731224120Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 7 23:44:23.734155 containerd[1741]: time="2025-05-07T23:44:23.731237520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 7 23:44:23.734155 containerd[1741]: time="2025-05-07T23:44:23.731249360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 7 23:44:23.734155 containerd[1741]: time="2025-05-07T23:44:23.731260920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 7 23:44:23.734155 containerd[1741]: time="2025-05-07T23:44:23.731281080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 7 23:44:23.734155 containerd[1741]: time="2025-05-07T23:44:23.731295080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 7 23:44:23.733868 systemd[1]: Started containerd.service - containerd container runtime. May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731307640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731321440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731333720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731349560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731360760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731373320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731385760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731399120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731410240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731421520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731433680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731448080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731468480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731486720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 7 23:44:23.734689 containerd[1741]: time="2025-05-07T23:44:23.731497360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 May 7 23:44:23.734981 containerd[1741]: time="2025-05-07T23:44:23.731548360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 7 23:44:23.734981 containerd[1741]: time="2025-05-07T23:44:23.731567320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 7 23:44:23.734981 containerd[1741]: time="2025-05-07T23:44:23.731577800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 7 23:44:23.734981 containerd[1741]: time="2025-05-07T23:44:23.731588560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 7 23:44:23.734981 containerd[1741]: time="2025-05-07T23:44:23.731597960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 7 23:44:23.734981 containerd[1741]: time="2025-05-07T23:44:23.731610720Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 7 23:44:23.734981 containerd[1741]: time="2025-05-07T23:44:23.731619480Z" level=info msg="NRI interface is disabled by configuration." May 7 23:44:23.734981 containerd[1741]: time="2025-05-07T23:44:23.731628600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 7 23:44:23.735169 containerd[1741]: time="2025-05-07T23:44:23.731902720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 7 23:44:23.735169 containerd[1741]: time="2025-05-07T23:44:23.731948960Z" level=info msg="Connect containerd service" May 7 23:44:23.735169 containerd[1741]: time="2025-05-07T23:44:23.731984520Z" level=info msg="using legacy CRI server" May 7 23:44:23.735169 containerd[1741]: time="2025-05-07T23:44:23.731991480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 7 23:44:23.735169 containerd[1741]: time="2025-05-07T23:44:23.732131240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 7 23:44:23.735169 containerd[1741]: time="2025-05-07T23:44:23.732914920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 7 23:44:23.735169 containerd[1741]: time="2025-05-07T23:44:23.733348760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 7 23:44:23.735169 containerd[1741]: time="2025-05-07T23:44:23.733403880Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 7 23:44:23.735169 containerd[1741]: time="2025-05-07T23:44:23.733499960Z" level=info msg="Start subscribing containerd event" May 7 23:44:23.735169 containerd[1741]: time="2025-05-07T23:44:23.733604240Z" level=info msg="Start recovering state" May 7 23:44:23.735169 containerd[1741]: time="2025-05-07T23:44:23.733668720Z" level=info msg="Start event monitor" May 7 23:44:23.735169 containerd[1741]: time="2025-05-07T23:44:23.733679280Z" level=info msg="Start snapshots syncer" May 7 23:44:23.735169 containerd[1741]: time="2025-05-07T23:44:23.733689360Z" level=info msg="Start cni network conf syncer for default" May 7 23:44:23.735169 containerd[1741]: time="2025-05-07T23:44:23.733696600Z" level=info msg="Start streaming server" May 7 23:44:23.746221 containerd[1741]: time="2025-05-07T23:44:23.744073080Z" level=info msg="containerd successfully booted in 0.068966s" May 7 23:44:23.765792 tar[1737]: linux-arm64/LICENSE May 7 23:44:23.765792 tar[1737]: linux-arm64/README.md May 7 23:44:23.776938 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 7 23:44:23.969214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:44:23.976203 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 7 23:44:24.382010 kubelet[1845]: E0507 23:44:24.381895 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 7 23:44:24.384754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 7 23:44:24.385024 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 7 23:44:24.385363 systemd[1]: kubelet.service: Consumed 663ms CPU time, 232M memory peak. May 7 23:44:24.698917 sshd_keygen[1715]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 7 23:44:24.718107 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 7 23:44:24.731310 systemd[1]: Starting issuegen.service - Generate /run/issue... May 7 23:44:24.738268 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 7 23:44:24.745702 systemd[1]: issuegen.service: Deactivated successfully. May 7 23:44:24.747190 systemd[1]: Finished issuegen.service - Generate /run/issue. May 7 23:44:24.765434 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 7 23:44:24.773086 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 7 23:44:24.794008 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 7 23:44:24.807350 systemd[1]: Started getty@tty1.service - Getty on tty1. May 7 23:44:24.814606 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 7 23:44:24.822548 systemd[1]: Reached target getty.target - Login Prompts. May 7 23:44:24.828142 systemd[1]: Reached target multi-user.target - Multi-User System. May 7 23:44:24.835965 systemd[1]: Startup finished in 695ms (kernel) + 12.297s (initrd) + 12.537s (userspace) = 25.530s. May 7 23:44:25.046239 login[1875]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 7 23:44:25.046645 login[1874]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 7 23:44:25.062108 systemd-logind[1703]: New session 2 of user core. May 7 23:44:25.063272 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 7 23:44:25.076353 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 7 23:44:25.100247 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
May 7 23:44:25.107380 systemd[1]: Starting user@500.service - User Manager for UID 500... May 7 23:44:25.111380 (systemd)[1882]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 7 23:44:25.113720 systemd-logind[1703]: New session c1 of user core. May 7 23:44:25.392724 systemd[1882]: Queued start job for default target default.target. May 7 23:44:25.403977 systemd[1882]: Created slice app.slice - User Application Slice. May 7 23:44:25.404006 systemd[1882]: Reached target paths.target - Paths. May 7 23:44:25.404147 systemd[1882]: Reached target timers.target - Timers. May 7 23:44:25.405408 systemd[1882]: Starting dbus.socket - D-Bus User Message Bus Socket... May 7 23:44:25.414655 systemd[1882]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 7 23:44:25.414715 systemd[1882]: Reached target sockets.target - Sockets. May 7 23:44:25.414759 systemd[1882]: Reached target basic.target - Basic System. May 7 23:44:25.414787 systemd[1882]: Reached target default.target - Main User Target. May 7 23:44:25.414811 systemd[1882]: Startup finished in 294ms. May 7 23:44:25.415095 systemd[1]: Started user@500.service - User Manager for UID 500. May 7 23:44:25.421374 systemd[1]: Started session-2.scope - Session 2 of User core. May 7 23:44:26.048043 login[1875]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 7 23:44:26.054345 systemd-logind[1703]: New session 1 of user core. May 7 23:44:26.061208 systemd[1]: Started session-1.scope - Session 1 of User core. 
May 7 23:44:26.569054 waagent[1871]: 2025-05-07T23:44:26.567789Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 May 7 23:44:26.573327 waagent[1871]: 2025-05-07T23:44:26.573255Z INFO Daemon Daemon OS: flatcar 4230.1.1 May 7 23:44:26.577751 waagent[1871]: 2025-05-07T23:44:26.577686Z INFO Daemon Daemon Python: 3.11.11 May 7 23:44:26.581886 waagent[1871]: 2025-05-07T23:44:26.581825Z INFO Daemon Daemon Run daemon May 7 23:44:26.585806 waagent[1871]: 2025-05-07T23:44:26.585751Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.1.1' May 7 23:44:26.596310 waagent[1871]: 2025-05-07T23:44:26.596241Z INFO Daemon Daemon Using waagent for provisioning May 7 23:44:26.601842 waagent[1871]: 2025-05-07T23:44:26.601790Z INFO Daemon Daemon Activate resource disk May 7 23:44:26.606411 waagent[1871]: 2025-05-07T23:44:26.606362Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 7 23:44:26.618685 waagent[1871]: 2025-05-07T23:44:26.618620Z INFO Daemon Daemon Found device: None May 7 23:44:26.622880 waagent[1871]: 2025-05-07T23:44:26.622826Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 7 23:44:26.630814 waagent[1871]: 2025-05-07T23:44:26.630762Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 7 23:44:26.642139 waagent[1871]: 2025-05-07T23:44:26.642088Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 7 23:44:26.647549 waagent[1871]: 2025-05-07T23:44:26.647503Z INFO Daemon Daemon Running default provisioning handler May 7 23:44:26.659171 waagent[1871]: 2025-05-07T23:44:26.659091Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
May 7 23:44:26.671844 waagent[1871]: 2025-05-07T23:44:26.671777Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 7 23:44:26.680875 waagent[1871]: 2025-05-07T23:44:26.680812Z INFO Daemon Daemon cloud-init is enabled: False May 7 23:44:26.685623 waagent[1871]: 2025-05-07T23:44:26.685569Z INFO Daemon Daemon Copying ovf-env.xml May 7 23:44:26.795630 waagent[1871]: 2025-05-07T23:44:26.795526Z INFO Daemon Daemon Successfully mounted dvd May 7 23:44:26.810919 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 7 23:44:26.813593 waagent[1871]: 2025-05-07T23:44:26.813514Z INFO Daemon Daemon Detect protocol endpoint May 7 23:44:26.819111 waagent[1871]: 2025-05-07T23:44:26.818994Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 7 23:44:26.825079 waagent[1871]: 2025-05-07T23:44:26.824989Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler May 7 23:44:26.831829 waagent[1871]: 2025-05-07T23:44:26.831766Z INFO Daemon Daemon Test for route to 168.63.129.16 May 7 23:44:26.837231 waagent[1871]: 2025-05-07T23:44:26.837174Z INFO Daemon Daemon Route to 168.63.129.16 exists May 7 23:44:26.843570 waagent[1871]: 2025-05-07T23:44:26.843518Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 7 23:44:26.875443 waagent[1871]: 2025-05-07T23:44:26.875394Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 7 23:44:26.882296 waagent[1871]: 2025-05-07T23:44:26.882266Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 7 23:44:26.887506 waagent[1871]: 2025-05-07T23:44:26.887457Z INFO Daemon Daemon Server preferred version:2015-04-05 May 7 23:44:26.983712 waagent[1871]: 2025-05-07T23:44:26.983606Z INFO Daemon Daemon Initializing goal state during protocol detection May 7 23:44:26.990397 waagent[1871]: 2025-05-07T23:44:26.990327Z INFO Daemon Daemon Forcing an update of the goal state. 
May 7 23:44:27.000592 waagent[1871]: 2025-05-07T23:44:27.000539Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 7 23:44:27.046303 waagent[1871]: 2025-05-07T23:44:27.046249Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 7 23:44:27.052686 waagent[1871]: 2025-05-07T23:44:27.052634Z INFO Daemon May 7 23:44:27.056083 waagent[1871]: 2025-05-07T23:44:27.056018Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: d096d0f9-22a4-4448-85ca-209ae924376d eTag: 9817216749906255532 source: Fabric] May 7 23:44:27.067665 waagent[1871]: 2025-05-07T23:44:27.067614Z INFO Daemon The vmSettings originated via Fabric; will ignore them. May 7 23:44:27.077245 waagent[1871]: 2025-05-07T23:44:27.077157Z INFO Daemon May 7 23:44:27.081185 waagent[1871]: 2025-05-07T23:44:27.081136Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 7 23:44:27.092717 waagent[1871]: 2025-05-07T23:44:27.092667Z INFO Daemon Daemon Downloading artifacts profile blob May 7 23:44:27.174585 waagent[1871]: 2025-05-07T23:44:27.174491Z INFO Daemon Downloaded certificate {'thumbprint': 'AC5261D4179CA46C3F0131A167A803053CC99B70', 'hasPrivateKey': False} May 7 23:44:27.184742 waagent[1871]: 2025-05-07T23:44:27.184691Z INFO Daemon Downloaded certificate {'thumbprint': '86AA7624C7C04A6F10A663A03D78230C5F9A9FD8', 'hasPrivateKey': True} May 7 23:44:27.194925 waagent[1871]: 2025-05-07T23:44:27.194872Z INFO Daemon Fetch goal state completed May 7 23:44:27.206457 waagent[1871]: 2025-05-07T23:44:27.206412Z INFO Daemon Daemon Starting provisioning May 7 23:44:27.211929 waagent[1871]: 2025-05-07T23:44:27.211868Z INFO Daemon Daemon Handle ovf-env.xml. 
May 7 23:44:27.216458 waagent[1871]: 2025-05-07T23:44:27.216408Z INFO Daemon Daemon Set hostname [ci-4230.1.1-n-afbb805c8a] May 7 23:44:27.239050 waagent[1871]: 2025-05-07T23:44:27.238446Z INFO Daemon Daemon Publish hostname [ci-4230.1.1-n-afbb805c8a] May 7 23:44:27.245339 waagent[1871]: 2025-05-07T23:44:27.245272Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 7 23:44:27.251893 waagent[1871]: 2025-05-07T23:44:27.251836Z INFO Daemon Daemon Primary interface is [eth0] May 7 23:44:27.263745 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 7 23:44:27.263754 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 7 23:44:27.263780 systemd-networkd[1428]: eth0: DHCP lease lost May 7 23:44:27.265062 waagent[1871]: 2025-05-07T23:44:27.264710Z INFO Daemon Daemon Create user account if not exists May 7 23:44:27.270465 waagent[1871]: 2025-05-07T23:44:27.270404Z INFO Daemon Daemon User core already exists, skip useradd May 7 23:44:27.277587 waagent[1871]: 2025-05-07T23:44:27.277521Z INFO Daemon Daemon Configure sudoer May 7 23:44:27.282940 waagent[1871]: 2025-05-07T23:44:27.282864Z INFO Daemon Daemon Configure sshd May 7 23:44:27.287997 waagent[1871]: 2025-05-07T23:44:27.287933Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 7 23:44:27.300856 waagent[1871]: 2025-05-07T23:44:27.300786Z INFO Daemon Daemon Deploy ssh public key. 
May 7 23:44:27.313404 systemd-networkd[1428]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 7 23:44:28.412371 waagent[1871]: 2025-05-07T23:44:28.412321Z INFO Daemon Daemon Provisioning complete May 7 23:44:28.430842 waagent[1871]: 2025-05-07T23:44:28.430791Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 7 23:44:28.437997 waagent[1871]: 2025-05-07T23:44:28.437931Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 7 23:44:28.447794 waagent[1871]: 2025-05-07T23:44:28.447732Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent May 7 23:44:28.582628 waagent[1937]: 2025-05-07T23:44:28.582081Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) May 7 23:44:28.582628 waagent[1937]: 2025-05-07T23:44:28.582241Z INFO ExtHandler ExtHandler OS: flatcar 4230.1.1 May 7 23:44:28.582628 waagent[1937]: 2025-05-07T23:44:28.582294Z INFO ExtHandler ExtHandler Python: 3.11.11 May 7 23:44:28.784825 waagent[1937]: 2025-05-07T23:44:28.784688Z INFO ExtHandler ExtHandler Distro: flatcar-4230.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 7 23:44:28.785161 waagent[1937]: 2025-05-07T23:44:28.785121Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 7 23:44:28.785303 waagent[1937]: 2025-05-07T23:44:28.785270Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 7 23:44:28.796716 waagent[1937]: 2025-05-07T23:44:28.796645Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 7 23:44:28.805076 waagent[1937]: 2025-05-07T23:44:28.805012Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 7 23:44:28.806060 waagent[1937]: 2025-05-07T23:44:28.805699Z INFO ExtHandler May 7 23:44:28.806060 waagent[1937]: 2025-05-07T23:44:28.805775Z 
INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d73f1905-650a-4774-94d4-383904356f71 eTag: 9817216749906255532 source: Fabric] May 7 23:44:28.806151 waagent[1937]: 2025-05-07T23:44:28.806079Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 7 23:44:28.806693 waagent[1937]: 2025-05-07T23:44:28.806641Z INFO ExtHandler May 7 23:44:28.806754 waagent[1937]: 2025-05-07T23:44:28.806724Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 7 23:44:28.810763 waagent[1937]: 2025-05-07T23:44:28.810729Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 7 23:44:28.897070 waagent[1937]: 2025-05-07T23:44:28.896324Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AC5261D4179CA46C3F0131A167A803053CC99B70', 'hasPrivateKey': False} May 7 23:44:28.897070 waagent[1937]: 2025-05-07T23:44:28.896792Z INFO ExtHandler Downloaded certificate {'thumbprint': '86AA7624C7C04A6F10A663A03D78230C5F9A9FD8', 'hasPrivateKey': True} May 7 23:44:28.897279 waagent[1937]: 2025-05-07T23:44:28.897231Z INFO ExtHandler Fetch goal state completed May 7 23:44:28.913018 waagent[1937]: 2025-05-07T23:44:28.912963Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1937 May 7 23:44:28.913228 waagent[1937]: 2025-05-07T23:44:28.913167Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 7 23:44:28.914854 waagent[1937]: 2025-05-07T23:44:28.914810Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.1.1', '', 'Flatcar Container Linux by Kinvolk'] May 7 23:44:28.915254 waagent[1937]: 2025-05-07T23:44:28.915217Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 7 23:44:28.935759 waagent[1937]: 2025-05-07T23:44:28.935712Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 7 23:44:28.935953 waagent[1937]: 
2025-05-07T23:44:28.935913Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 7 23:44:28.941436 waagent[1937]: 2025-05-07T23:44:28.941397Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 7 23:44:28.947757 systemd[1]: Reload requested from client PID 1952 ('systemctl') (unit waagent.service)... May 7 23:44:28.947773 systemd[1]: Reloading... May 7 23:44:29.022060 zram_generator::config[1991]: No configuration found. May 7 23:44:29.125220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 7 23:44:29.227631 systemd[1]: Reloading finished in 279 ms. May 7 23:44:29.245063 waagent[1937]: 2025-05-07T23:44:29.241243Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service May 7 23:44:29.248161 systemd[1]: Reload requested from client PID 2045 ('systemctl') (unit waagent.service)... May 7 23:44:29.248174 systemd[1]: Reloading... May 7 23:44:29.324340 zram_generator::config[2082]: No configuration found. May 7 23:44:29.429038 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 7 23:44:29.531814 systemd[1]: Reloading finished in 283 ms. 
May 7 23:44:29.551045 waagent[1937]: 2025-05-07T23:44:29.548989Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 7 23:44:29.551045 waagent[1937]: 2025-05-07T23:44:29.549188Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 7 23:44:29.882184 waagent[1937]: 2025-05-07T23:44:29.882097Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 7 23:44:29.882789 waagent[1937]: 2025-05-07T23:44:29.882712Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] May 7 23:44:29.883687 waagent[1937]: 2025-05-07T23:44:29.883593Z INFO ExtHandler ExtHandler Starting env monitor service. May 7 23:44:29.884103 waagent[1937]: 2025-05-07T23:44:29.883988Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 7 23:44:29.885065 waagent[1937]: 2025-05-07T23:44:29.884344Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 7 23:44:29.885065 waagent[1937]: 2025-05-07T23:44:29.884430Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 7 23:44:29.885065 waagent[1937]: 2025-05-07T23:44:29.884618Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
May 7 23:44:29.885065 waagent[1937]: 2025-05-07T23:44:29.884781Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 7 23:44:29.885065 waagent[1937]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 7 23:44:29.885065 waagent[1937]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 7 23:44:29.885065 waagent[1937]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 7 23:44:29.885065 waagent[1937]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 7 23:44:29.885065 waagent[1937]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 7 23:44:29.885065 waagent[1937]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 7 23:44:29.885445 waagent[1937]: 2025-05-07T23:44:29.885354Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 7 23:44:29.885660 waagent[1937]: 2025-05-07T23:44:29.885444Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 7 23:44:29.886111 waagent[1937]: 2025-05-07T23:44:29.886045Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 7 23:44:29.886190 waagent[1937]: 2025-05-07T23:44:29.886112Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 7 23:44:29.886216 waagent[1937]: 2025-05-07T23:44:29.886176Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
May 7 23:44:29.886445 waagent[1937]: 2025-05-07T23:44:29.886383Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 7 23:44:29.887141 waagent[1937]: 2025-05-07T23:44:29.887087Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 7 23:44:29.887929 waagent[1937]: 2025-05-07T23:44:29.887860Z INFO EnvHandler ExtHandler Configure routes May 7 23:44:29.888478 waagent[1937]: 2025-05-07T23:44:29.888323Z INFO EnvHandler ExtHandler Gateway:None May 7 23:44:29.888550 waagent[1937]: 2025-05-07T23:44:29.888506Z INFO EnvHandler ExtHandler Routes:None May 7 23:44:29.892978 waagent[1937]: 2025-05-07T23:44:29.892918Z INFO ExtHandler ExtHandler May 7 23:44:29.893097 waagent[1937]: 2025-05-07T23:44:29.893053Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 32c7ffb4-8c09-47d4-9085-2420f7a7546f correlation 1e28458e-c8e9-4e61-a9d7-5671ccc94e75 created: 2025-05-07T23:43:12.171303Z] May 7 23:44:29.893949 waagent[1937]: 2025-05-07T23:44:29.893892Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
May 7 23:44:29.895826 waagent[1937]: 2025-05-07T23:44:29.895718Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] May 7 23:44:29.935916 waagent[1937]: 2025-05-07T23:44:29.935796Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D6A0B691-B4FF-481B-A410-400888531FF6;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] May 7 23:44:29.940059 waagent[1937]: 2025-05-07T23:44:29.939959Z INFO MonitorHandler ExtHandler Network interfaces: May 7 23:44:29.940059 waagent[1937]: Executing ['ip', '-a', '-o', 'link']: May 7 23:44:29.940059 waagent[1937]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 7 23:44:29.940059 waagent[1937]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b7:30:3a brd ff:ff:ff:ff:ff:ff May 7 23:44:29.940059 waagent[1937]: 3: enP61019s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b7:30:3a brd ff:ff:ff:ff:ff:ff\ altname enP61019p0s2 May 7 23:44:29.940059 waagent[1937]: Executing ['ip', '-4', '-a', '-o', 'address']: May 7 23:44:29.940059 waagent[1937]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 7 23:44:29.940059 waagent[1937]: 2: eth0 inet 10.200.20.32/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 7 23:44:29.940059 waagent[1937]: Executing ['ip', '-6', '-a', '-o', 'address']: May 7 23:44:29.940059 waagent[1937]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 7 23:44:29.940059 waagent[1937]: 2: eth0 inet6 fe80::222:48ff:feb7:303a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 7 23:44:29.940059 waagent[1937]: 3: enP61019s1 inet6 fe80::222:48ff:feb7:303a/64 scope link proto kernel_ll \ 
valid_lft forever preferred_lft forever May 7 23:44:30.001143 waagent[1937]: 2025-05-07T23:44:30.000864Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: May 7 23:44:30.001143 waagent[1937]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 7 23:44:30.001143 waagent[1937]: pkts bytes target prot opt in out source destination May 7 23:44:30.001143 waagent[1937]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 7 23:44:30.001143 waagent[1937]: pkts bytes target prot opt in out source destination May 7 23:44:30.001143 waagent[1937]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 7 23:44:30.001143 waagent[1937]: pkts bytes target prot opt in out source destination May 7 23:44:30.001143 waagent[1937]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 7 23:44:30.001143 waagent[1937]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 7 23:44:30.001143 waagent[1937]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 7 23:44:30.003777 waagent[1937]: 2025-05-07T23:44:30.003713Z INFO EnvHandler ExtHandler Current Firewall rules: May 7 23:44:30.003777 waagent[1937]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 7 23:44:30.003777 waagent[1937]: pkts bytes target prot opt in out source destination May 7 23:44:30.003777 waagent[1937]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 7 23:44:30.003777 waagent[1937]: pkts bytes target prot opt in out source destination May 7 23:44:30.003777 waagent[1937]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 7 23:44:30.003777 waagent[1937]: pkts bytes target prot opt in out source destination May 7 23:44:30.003777 waagent[1937]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 7 23:44:30.003777 waagent[1937]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 7 23:44:30.003777 waagent[1937]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 7 23:44:30.004023 waagent[1937]: 
2025-05-07T23:44:30.003984Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 7 23:44:34.623025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 7 23:44:34.630211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:44:34.726686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:44:34.730742 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 7 23:44:34.771901 kubelet[2178]: E0507 23:44:34.771783 2178 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 7 23:44:34.774010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 7 23:44:34.774153 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 7 23:44:34.774579 systemd[1]: kubelet.service: Consumed 117ms CPU time, 96.6M memory peak. May 7 23:44:44.873312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 7 23:44:44.883215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:44:44.969969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 7 23:44:44.974251 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 7 23:44:45.012337 kubelet[2193]: E0507 23:44:45.012232 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 7 23:44:45.014314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 7 23:44:45.014464 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 7 23:44:45.014906 systemd[1]: kubelet.service: Consumed 118ms CPU time, 96.4M memory peak. May 7 23:44:46.479904 chronyd[1689]: Selected source PHC0 May 7 23:44:55.123190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 7 23:44:55.131227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:44:55.413854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:44:55.417610 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 7 23:44:55.452783 kubelet[2208]: E0507 23:44:55.452690 2208 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 7 23:44:55.455302 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 7 23:44:55.455443 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 7 23:44:55.457109 systemd[1]: kubelet.service: Consumed 118ms CPU time, 94.9M memory peak. May 7 23:44:59.291590 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 7 23:44:59.300288 systemd[1]: Started sshd@0-10.200.20.32:22-10.200.16.10:46960.service - OpenSSH per-connection server daemon (10.200.16.10:46960). May 7 23:44:59.817321 sshd[2216]: Accepted publickey for core from 10.200.16.10 port 46960 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y May 7 23:44:59.818597 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:44:59.822864 systemd-logind[1703]: New session 3 of user core. May 7 23:44:59.832241 systemd[1]: Started session-3.scope - Session 3 of User core. May 7 23:45:00.202591 systemd[1]: Started sshd@1-10.200.20.32:22-10.200.16.10:46962.service - OpenSSH per-connection server daemon (10.200.16.10:46962). May 7 23:45:00.651512 sshd[2221]: Accepted publickey for core from 10.200.16.10 port 46962 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y May 7 23:45:00.652770 sshd-session[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:45:00.658095 systemd-logind[1703]: New session 4 of user core. May 7 23:45:00.663186 systemd[1]: Started session-4.scope - Session 4 of User core. May 7 23:45:00.980312 sshd[2223]: Connection closed by 10.200.16.10 port 46962 May 7 23:45:00.979780 sshd-session[2221]: pam_unix(sshd:session): session closed for user core May 7 23:45:00.983393 systemd-logind[1703]: Session 4 logged out. Waiting for processes to exit. May 7 23:45:00.983962 systemd[1]: sshd@1-10.200.20.32:22-10.200.16.10:46962.service: Deactivated successfully. May 7 23:45:00.985836 systemd[1]: session-4.scope: Deactivated successfully. May 7 23:45:00.986938 systemd-logind[1703]: Removed session 4. 
May 7 23:45:01.067357 systemd[1]: Started sshd@2-10.200.20.32:22-10.200.16.10:46970.service - OpenSSH per-connection server daemon (10.200.16.10:46970). May 7 23:45:01.513660 sshd[2229]: Accepted publickey for core from 10.200.16.10 port 46970 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y May 7 23:45:01.514968 sshd-session[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:45:01.520084 systemd-logind[1703]: New session 5 of user core. May 7 23:45:01.526244 systemd[1]: Started session-5.scope - Session 5 of User core. May 7 23:45:01.837543 sshd[2231]: Connection closed by 10.200.16.10 port 46970 May 7 23:45:01.838066 sshd-session[2229]: pam_unix(sshd:session): session closed for user core May 7 23:45:01.841639 systemd[1]: sshd@2-10.200.20.32:22-10.200.16.10:46970.service: Deactivated successfully. May 7 23:45:01.843320 systemd[1]: session-5.scope: Deactivated successfully. May 7 23:45:01.844057 systemd-logind[1703]: Session 5 logged out. Waiting for processes to exit. May 7 23:45:01.844822 systemd-logind[1703]: Removed session 5. May 7 23:45:01.914376 systemd[1]: Started sshd@3-10.200.20.32:22-10.200.16.10:46984.service - OpenSSH per-connection server daemon (10.200.16.10:46984). May 7 23:45:02.333666 sshd[2237]: Accepted publickey for core from 10.200.16.10 port 46984 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y May 7 23:45:02.334903 sshd-session[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:45:02.340126 systemd-logind[1703]: New session 6 of user core. May 7 23:45:02.342185 systemd[1]: Started session-6.scope - Session 6 of User core. May 7 23:45:02.645222 sshd[2239]: Connection closed by 10.200.16.10 port 46984 May 7 23:45:02.645772 sshd-session[2237]: pam_unix(sshd:session): session closed for user core May 7 23:45:02.648993 systemd[1]: sshd@3-10.200.20.32:22-10.200.16.10:46984.service: Deactivated successfully. 
May 7 23:45:02.650636 systemd[1]: session-6.scope: Deactivated successfully. May 7 23:45:02.651361 systemd-logind[1703]: Session 6 logged out. Waiting for processes to exit. May 7 23:45:02.652214 systemd-logind[1703]: Removed session 6. May 7 23:45:02.723582 systemd[1]: Started sshd@4-10.200.20.32:22-10.200.16.10:46992.service - OpenSSH per-connection server daemon (10.200.16.10:46992). May 7 23:45:03.140053 sshd[2245]: Accepted publickey for core from 10.200.16.10 port 46992 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y May 7 23:45:03.141361 sshd-session[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:45:03.145181 systemd-logind[1703]: New session 7 of user core. May 7 23:45:03.156212 systemd[1]: Started session-7.scope - Session 7 of User core. May 7 23:45:03.464319 sudo[2248]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 7 23:45:03.464600 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 7 23:45:03.492783 sudo[2248]: pam_unix(sudo:session): session closed for user root May 7 23:45:03.558388 sshd[2247]: Connection closed by 10.200.16.10 port 46992 May 7 23:45:03.557538 sshd-session[2245]: pam_unix(sshd:session): session closed for user core May 7 23:45:03.561263 systemd[1]: sshd@4-10.200.20.32:22-10.200.16.10:46992.service: Deactivated successfully. May 7 23:45:03.562923 systemd[1]: session-7.scope: Deactivated successfully. May 7 23:45:03.563794 systemd-logind[1703]: Session 7 logged out. Waiting for processes to exit. May 7 23:45:03.564880 systemd-logind[1703]: Removed session 7. May 7 23:45:03.634465 systemd[1]: Started sshd@5-10.200.20.32:22-10.200.16.10:46998.service - OpenSSH per-connection server daemon (10.200.16.10:46998). 
May 7 23:45:04.064091 sshd[2254]: Accepted publickey for core from 10.200.16.10 port 46998 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y May 7 23:45:04.065359 sshd-session[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:45:04.070216 systemd-logind[1703]: New session 8 of user core. May 7 23:45:04.079263 systemd[1]: Started session-8.scope - Session 8 of User core. May 7 23:45:04.304175 sudo[2258]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 7 23:45:04.304457 sudo[2258]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 7 23:45:04.307784 sudo[2258]: pam_unix(sudo:session): session closed for user root May 7 23:45:04.312402 sudo[2257]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 7 23:45:04.312664 sudo[2257]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 7 23:45:04.329323 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 7 23:45:04.351058 augenrules[2280]: No rules May 7 23:45:04.352465 systemd[1]: audit-rules.service: Deactivated successfully. May 7 23:45:04.352680 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 7 23:45:04.354420 sudo[2257]: pam_unix(sudo:session): session closed for user root May 7 23:45:04.429357 sshd[2256]: Connection closed by 10.200.16.10 port 46998 May 7 23:45:04.429860 sshd-session[2254]: pam_unix(sshd:session): session closed for user core May 7 23:45:04.433291 systemd[1]: sshd@5-10.200.20.32:22-10.200.16.10:46998.service: Deactivated successfully. May 7 23:45:04.434748 systemd[1]: session-8.scope: Deactivated successfully. May 7 23:45:04.435389 systemd-logind[1703]: Session 8 logged out. Waiting for processes to exit. May 7 23:45:04.436424 systemd-logind[1703]: Removed session 8. 
May 7 23:45:04.504323 systemd[1]: Started sshd@6-10.200.20.32:22-10.200.16.10:47012.service - OpenSSH per-connection server daemon (10.200.16.10:47012).
May 7 23:45:04.921959 sshd[2289]: Accepted publickey for core from 10.200.16.10 port 47012 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:45:04.923210 sshd-session[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:45:04.927203 systemd-logind[1703]: New session 9 of user core.
May 7 23:45:04.938243 systemd[1]: Started session-9.scope - Session 9 of User core.
May 7 23:45:05.158597 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 7 23:45:05.158866 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 7 23:45:05.623103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 7 23:45:05.631214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:45:05.986934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:05.990850 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 7 23:45:06.025486 kubelet[2312]: E0507 23:45:06.025415 2312 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 7 23:45:06.027777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 7 23:45:06.027911 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 7 23:45:06.028545 systemd[1]: kubelet.service: Consumed 116ms CPU time, 96.4M memory peak.
May 7 23:45:06.624270 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 7 23:45:06.624421 (dockerd)[2325]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 7 23:45:06.782498 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
May 7 23:45:07.364325 dockerd[2325]: time="2025-05-07T23:45:07.364078910Z" level=info msg="Starting up"
May 7 23:45:07.812876 dockerd[2325]: time="2025-05-07T23:45:07.812837065Z" level=info msg="Loading containers: start."
May 7 23:45:08.018053 kernel: Initializing XFRM netlink socket
May 7 23:45:08.121351 systemd-networkd[1428]: docker0: Link UP
May 7 23:45:08.151042 dockerd[2325]: time="2025-05-07T23:45:08.150988011Z" level=info msg="Loading containers: done."
May 7 23:45:08.161672 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3858351240-merged.mount: Deactivated successfully.
May 7 23:45:08.173449 dockerd[2325]: time="2025-05-07T23:45:08.173411637Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 7 23:45:08.173540 dockerd[2325]: time="2025-05-07T23:45:08.173514237Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 7 23:45:08.173663 dockerd[2325]: time="2025-05-07T23:45:08.173638476Z" level=info msg="Daemon has completed initialization"
May 7 23:45:08.229808 dockerd[2325]: time="2025-05-07T23:45:08.229734001Z" level=info msg="API listen on /run/docker.sock"
May 7 23:45:08.230152 systemd[1]: Started docker.service - Docker Application Container Engine.
May 7 23:45:08.306964 update_engine[1710]: I20250507 23:45:08.306496 1710 update_attempter.cc:509] Updating boot flags...
May 7 23:45:08.381076 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2520)
May 7 23:45:09.228358 containerd[1741]: time="2025-05-07T23:45:09.228319608Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 7 23:45:10.006305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1745007414.mount: Deactivated successfully.
May 7 23:45:11.249072 containerd[1741]: time="2025-05-07T23:45:11.248677251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:11.252415 containerd[1741]: time="2025-05-07T23:45:11.252044289Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554608"
May 7 23:45:11.257769 containerd[1741]: time="2025-05-07T23:45:11.257713485Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:11.262542 containerd[1741]: time="2025-05-07T23:45:11.262492922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:11.263813 containerd[1741]: time="2025-05-07T23:45:11.263524641Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.035167993s"
May 7 23:45:11.263813 containerd[1741]: time="2025-05-07T23:45:11.263563601Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\""
May 7 23:45:11.264566 containerd[1741]: time="2025-05-07T23:45:11.264403601Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 7 23:45:12.566592 containerd[1741]: time="2025-05-07T23:45:12.566549433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:12.572305 containerd[1741]: time="2025-05-07T23:45:12.572243669Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458978"
May 7 23:45:12.576190 containerd[1741]: time="2025-05-07T23:45:12.576144067Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:12.582994 containerd[1741]: time="2025-05-07T23:45:12.582955022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:12.584130 containerd[1741]: time="2025-05-07T23:45:12.583978701Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.31954482s"
May 7 23:45:12.584130 containerd[1741]: time="2025-05-07T23:45:12.584016901Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
May 7 23:45:12.584839 containerd[1741]: time="2025-05-07T23:45:12.584623741Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 7 23:45:13.697046 containerd[1741]: time="2025-05-07T23:45:13.696979583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:13.699767 containerd[1741]: time="2025-05-07T23:45:13.699544421Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125813"
May 7 23:45:13.706564 containerd[1741]: time="2025-05-07T23:45:13.706518496Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:13.713204 containerd[1741]: time="2025-05-07T23:45:13.713129692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:13.714267 containerd[1741]: time="2025-05-07T23:45:13.714208251Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.12955379s"
May 7 23:45:13.714267 containerd[1741]: time="2025-05-07T23:45:13.714264171Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
May 7 23:45:13.715062 containerd[1741]: time="2025-05-07T23:45:13.714719410Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 7 23:45:14.924466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2560871219.mount: Deactivated successfully.
May 7 23:45:15.270063 containerd[1741]: time="2025-05-07T23:45:15.269306791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:15.271970 containerd[1741]: time="2025-05-07T23:45:15.271926949Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871917"
May 7 23:45:15.275129 containerd[1741]: time="2025-05-07T23:45:15.275085027Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:15.279155 containerd[1741]: time="2025-05-07T23:45:15.279112624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:15.279993 containerd[1741]: time="2025-05-07T23:45:15.279866624Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.565110974s"
May 7 23:45:15.279993 containerd[1741]: time="2025-05-07T23:45:15.279896264Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\""
May 7 23:45:15.280965 containerd[1741]: time="2025-05-07T23:45:15.280740823Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 7 23:45:15.971102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2004110236.mount: Deactivated successfully.
May 7 23:45:16.123023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 7 23:45:16.128241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:45:16.217756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:16.221843 (kubelet)[2656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 7 23:45:16.257177 kubelet[2656]: E0507 23:45:16.257076 2656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 7 23:45:16.259274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 7 23:45:16.259424 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 7 23:45:16.259905 systemd[1]: kubelet.service: Consumed 121ms CPU time, 94.1M memory peak.
May 7 23:45:17.829643 containerd[1741]: time="2025-05-07T23:45:17.829597686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:17.880794 containerd[1741]: time="2025-05-07T23:45:17.880726291Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
May 7 23:45:17.885920 containerd[1741]: time="2025-05-07T23:45:17.885873927Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:18.420893 containerd[1741]: time="2025-05-07T23:45:18.420782363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:18.421777 containerd[1741]: time="2025-05-07T23:45:18.421555482Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 3.140784619s"
May 7 23:45:18.421777 containerd[1741]: time="2025-05-07T23:45:18.421586642Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 7 23:45:18.422143 containerd[1741]: time="2025-05-07T23:45:18.422114322Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 7 23:45:19.028407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923358696.mount: Deactivated successfully.
May 7 23:45:19.057078 containerd[1741]: time="2025-05-07T23:45:19.056814364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:19.061103 containerd[1741]: time="2025-05-07T23:45:19.061049481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
May 7 23:45:19.066323 containerd[1741]: time="2025-05-07T23:45:19.066276676Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:19.071545 containerd[1741]: time="2025-05-07T23:45:19.071490752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:19.072623 containerd[1741]: time="2025-05-07T23:45:19.072187951Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 650.040869ms"
May 7 23:45:19.072623 containerd[1741]: time="2025-05-07T23:45:19.072219271Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 7 23:45:19.072831 containerd[1741]: time="2025-05-07T23:45:19.072798511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 7 23:45:19.873492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831353756.mount: Deactivated successfully.
May 7 23:45:21.592022 containerd[1741]: time="2025-05-07T23:45:21.591289474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:21.593796 containerd[1741]: time="2025-05-07T23:45:21.593740392Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465"
May 7 23:45:21.597137 containerd[1741]: time="2025-05-07T23:45:21.597104229Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:21.604297 containerd[1741]: time="2025-05-07T23:45:21.604255783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:21.605613 containerd[1741]: time="2025-05-07T23:45:21.605568582Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.532734151s"
May 7 23:45:21.605613 containerd[1741]: time="2025-05-07T23:45:21.605607102Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 7 23:45:26.373073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 7 23:45:26.379353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:45:26.543240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:26.547838 (kubelet)[2783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 7 23:45:26.593482 kubelet[2783]: E0507 23:45:26.590995 2783 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 7 23:45:26.594014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 7 23:45:26.594163 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 7 23:45:26.594443 systemd[1]: kubelet.service: Consumed 123ms CPU time, 92.7M memory peak.
May 7 23:45:26.839628 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:26.839861 systemd[1]: kubelet.service: Consumed 123ms CPU time, 92.7M memory peak.
May 7 23:45:26.845293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:45:26.881211 systemd[1]: Reload requested from client PID 2797 ('systemctl') (unit session-9.scope)...
May 7 23:45:26.881229 systemd[1]: Reloading...
May 7 23:45:27.007107 zram_generator::config[2859]: No configuration found.
May 7 23:45:27.085124 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 7 23:45:27.189408 systemd[1]: Reloading finished in 307 ms.
May 7 23:45:27.233262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:27.245494 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:45:27.247141 systemd[1]: kubelet.service: Deactivated successfully.
May 7 23:45:27.247370 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:27.247438 systemd[1]: kubelet.service: Consumed 81ms CPU time, 82.3M memory peak.
May 7 23:45:27.253283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:45:27.366253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:27.381549 (kubelet)[2914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 7 23:45:27.420349 kubelet[2914]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 7 23:45:27.422195 kubelet[2914]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 7 23:45:27.422195 kubelet[2914]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 7 23:45:27.422195 kubelet[2914]: I0507 23:45:27.420760 2914 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 7 23:45:28.144901 kubelet[2914]: I0507 23:45:28.144863 2914 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 7 23:45:28.145077 kubelet[2914]: I0507 23:45:28.145066 2914 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 7 23:45:28.145378 kubelet[2914]: I0507 23:45:28.145365 2914 server.go:929] "Client rotation is on, will bootstrap in background"
May 7 23:45:28.165287 kubelet[2914]: E0507 23:45:28.165241 2914 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError"
May 7 23:45:28.166284 kubelet[2914]: I0507 23:45:28.166263 2914 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 7 23:45:28.173959 kubelet[2914]: E0507 23:45:28.173810 2914 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 7 23:45:28.173959 kubelet[2914]: I0507 23:45:28.173846 2914 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 7 23:45:28.177960 kubelet[2914]: I0507 23:45:28.177775 2914 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 7 23:45:28.179065 kubelet[2914]: I0507 23:45:28.178484 2914 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 7 23:45:28.179065 kubelet[2914]: I0507 23:45:28.178637 2914 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 7 23:45:28.179065 kubelet[2914]: I0507 23:45:28.178667 2914 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-afbb805c8a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 7 23:45:28.179065 kubelet[2914]: I0507 23:45:28.178850 2914 topology_manager.go:138] "Creating topology manager with none policy"
May 7 23:45:28.179267 kubelet[2914]: I0507 23:45:28.178858 2914 container_manager_linux.go:300] "Creating device plugin manager"
May 7 23:45:28.179267 kubelet[2914]: I0507 23:45:28.178975 2914 state_mem.go:36] "Initialized new in-memory state store"
May 7 23:45:28.181242 kubelet[2914]: I0507 23:45:28.181217 2914 kubelet.go:408] "Attempting to sync node with API server"
May 7 23:45:28.181350 kubelet[2914]: I0507 23:45:28.181339 2914 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 7 23:45:28.181420 kubelet[2914]: I0507 23:45:28.181412 2914 kubelet.go:314] "Adding apiserver pod source"
May 7 23:45:28.181472 kubelet[2914]: I0507 23:45:28.181464 2914 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 7 23:45:28.182521 kubelet[2914]: W0507 23:45:28.182469 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-afbb805c8a&limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused
May 7 23:45:28.182592 kubelet[2914]: E0507 23:45:28.182536 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-afbb805c8a&limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError"
May 7 23:45:28.183508 kubelet[2914]: I0507 23:45:28.183371 2914 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 7 23:45:28.185709 kubelet[2914]: W0507 23:45:28.185415 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused
May 7 23:45:28.185709 kubelet[2914]: E0507 23:45:28.185479 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError"
May 7 23:45:28.185709 kubelet[2914]: I0507 23:45:28.185644 2914 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 7 23:45:28.186326 kubelet[2914]: W0507 23:45:28.186308 2914 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 7 23:45:28.190288 kubelet[2914]: I0507 23:45:28.190248 2914 server.go:1269] "Started kubelet"
May 7 23:45:28.192529 kubelet[2914]: I0507 23:45:28.192059 2914 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 7 23:45:28.195968 kubelet[2914]: I0507 23:45:28.194808 2914 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 7 23:45:28.195968 kubelet[2914]: I0507 23:45:28.195187 2914 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 7 23:45:28.196498 kubelet[2914]: I0507 23:45:28.196473 2914 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 7 23:45:28.197196 kubelet[2914]: E0507 23:45:28.195695 2914 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.32:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.32:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-afbb805c8a.183d6367da9e5601 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-afbb805c8a,UID:ci-4230.1.1-n-afbb805c8a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-afbb805c8a,},FirstTimestamp:2025-05-07 23:45:28.190211585 +0000 UTC m=+0.805322603,LastTimestamp:2025-05-07 23:45:28.190211585 +0000 UTC m=+0.805322603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-afbb805c8a,}"
May 7 23:45:28.198740 kubelet[2914]: I0507 23:45:28.198312 2914 server.go:460] "Adding debug handlers to kubelet server"
May 7 23:45:28.199832 kubelet[2914]: I0507 23:45:28.199806 2914 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 7 23:45:28.201173 kubelet[2914]: I0507 23:45:28.201150 2914 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 7 23:45:28.201431 kubelet[2914]: E0507 23:45:28.201405 2914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-afbb805c8a\" not found"
May 7 23:45:28.202903 kubelet[2914]: E0507 23:45:28.202874 2914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-afbb805c8a?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="200ms"
May 7 23:45:28.205418 kubelet[2914]: I0507 23:45:28.205397 2914 reconciler.go:26] "Reconciler: start to sync state"
May 7 23:45:28.205571 kubelet[2914]: I0507 23:45:28.205560 2914 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 7 23:45:28.206022 kubelet[2914]: W0507 23:45:28.205980 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused
May 7 23:45:28.206181 kubelet[2914]: E0507 23:45:28.206162 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError"
May 7 23:45:28.206351 kubelet[2914]: E0507 23:45:28.206337 2914 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 7 23:45:28.206573 kubelet[2914]: I0507 23:45:28.206558 2914 factory.go:221] Registration of the containerd container factory successfully
May 7 23:45:28.206639 kubelet[2914]: I0507 23:45:28.206630 2914 factory.go:221] Registration of the systemd container factory successfully
May 7 23:45:28.206772 kubelet[2914]: I0507 23:45:28.206757 2914 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 7 23:45:28.226575 kubelet[2914]: I0507 23:45:28.226547 2914 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 7 23:45:28.226756 kubelet[2914]: I0507 23:45:28.226745 2914 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 7 23:45:28.226812 kubelet[2914]: I0507 23:45:28.226804 2914 state_mem.go:36] "Initialized new in-memory state store"
May 7 23:45:28.234436 kubelet[2914]: I0507 23:45:28.234407 2914 policy_none.go:49] "None policy: Start"
May 7 23:45:28.235376 kubelet[2914]: I0507 23:45:28.235352 2914 memory_manager.go:170] "Starting memorymanager" policy="None"
May 7 23:45:28.235376 kubelet[2914]: I0507 23:45:28.235382 2914 state_mem.go:35] "Initializing new in-memory state store"
May 7 23:45:28.247205 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 7 23:45:28.251651 kubelet[2914]: I0507 23:45:28.251586 2914 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 7 23:45:28.252722 kubelet[2914]: I0507 23:45:28.252671 2914 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 7 23:45:28.252722 kubelet[2914]: I0507 23:45:28.252706 2914 status_manager.go:217] "Starting to sync pod status with apiserver"
May 7 23:45:28.252722 kubelet[2914]: I0507 23:45:28.252726 2914 kubelet.go:2321] "Starting kubelet main sync loop"
May 7 23:45:28.252852 kubelet[2914]: E0507 23:45:28.252768 2914 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 7 23:45:28.258893 kubelet[2914]: W0507 23:45:28.258830 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused
May 7 23:45:28.258893 kubelet[2914]: E0507 23:45:28.258895 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError"
May 7 23:45:28.260840 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 7 23:45:28.265039 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 7 23:45:28.273065 kubelet[2914]: I0507 23:45:28.272893 2914 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 7 23:45:28.273185 kubelet[2914]: I0507 23:45:28.273127 2914 eviction_manager.go:189] "Eviction manager: starting control loop" May 7 23:45:28.273211 kubelet[2914]: I0507 23:45:28.273139 2914 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 7 23:45:28.274497 kubelet[2914]: I0507 23:45:28.273787 2914 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 7 23:45:28.275845 kubelet[2914]: E0507 23:45:28.275777 2914 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-n-afbb805c8a\" not found" May 7 23:45:28.361715 systemd[1]: Created slice kubepods-burstable-pod63e55f4e65487365fd7a52900876b4a0.slice - libcontainer container kubepods-burstable-pod63e55f4e65487365fd7a52900876b4a0.slice. May 7 23:45:28.375554 kubelet[2914]: I0507 23:45:28.375514 2914 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-afbb805c8a" May 7 23:45:28.375887 kubelet[2914]: E0507 23:45:28.375858 2914 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-4230.1.1-n-afbb805c8a" May 7 23:45:28.381296 systemd[1]: Created slice kubepods-burstable-pod9297c12043ec6d0edd092f3d40df06b9.slice - libcontainer container kubepods-burstable-pod9297c12043ec6d0edd092f3d40df06b9.slice. May 7 23:45:28.385872 systemd[1]: Created slice kubepods-burstable-pod2daa84d5d77aa92819d5fb2092601cbd.slice - libcontainer container kubepods-burstable-pod2daa84d5d77aa92819d5fb2092601cbd.slice. 
May 7 23:45:28.403578 kubelet[2914]: E0507 23:45:28.403461 2914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-afbb805c8a?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="400ms" May 7 23:45:28.506877 kubelet[2914]: I0507 23:45:28.506831 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63e55f4e65487365fd7a52900876b4a0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-afbb805c8a\" (UID: \"63e55f4e65487365fd7a52900876b4a0\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-afbb805c8a" May 7 23:45:28.506877 kubelet[2914]: I0507 23:45:28.506879 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9297c12043ec6d0edd092f3d40df06b9-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-afbb805c8a\" (UID: \"9297c12043ec6d0edd092f3d40df06b9\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-afbb805c8a" May 7 23:45:28.507318 kubelet[2914]: I0507 23:45:28.506898 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9297c12043ec6d0edd092f3d40df06b9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-afbb805c8a\" (UID: \"9297c12043ec6d0edd092f3d40df06b9\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-afbb805c8a" May 7 23:45:28.507318 kubelet[2914]: I0507 23:45:28.506916 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2daa84d5d77aa92819d5fb2092601cbd-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-afbb805c8a\" (UID: 
\"2daa84d5d77aa92819d5fb2092601cbd\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-afbb805c8a" May 7 23:45:28.507318 kubelet[2914]: I0507 23:45:28.506933 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63e55f4e65487365fd7a52900876b4a0-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-afbb805c8a\" (UID: \"63e55f4e65487365fd7a52900876b4a0\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-afbb805c8a" May 7 23:45:28.507318 kubelet[2914]: I0507 23:45:28.506947 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9297c12043ec6d0edd092f3d40df06b9-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-afbb805c8a\" (UID: \"9297c12043ec6d0edd092f3d40df06b9\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-afbb805c8a" May 7 23:45:28.507318 kubelet[2914]: I0507 23:45:28.506961 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9297c12043ec6d0edd092f3d40df06b9-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-afbb805c8a\" (UID: \"9297c12043ec6d0edd092f3d40df06b9\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-afbb805c8a" May 7 23:45:28.507428 kubelet[2914]: I0507 23:45:28.506975 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9297c12043ec6d0edd092f3d40df06b9-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-afbb805c8a\" (UID: \"9297c12043ec6d0edd092f3d40df06b9\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-afbb805c8a" May 7 23:45:28.507428 kubelet[2914]: I0507 23:45:28.506989 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/63e55f4e65487365fd7a52900876b4a0-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-afbb805c8a\" (UID: \"63e55f4e65487365fd7a52900876b4a0\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-afbb805c8a" May 7 23:45:28.578625 kubelet[2914]: I0507 23:45:28.578602 2914 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-afbb805c8a" May 7 23:45:28.579000 kubelet[2914]: E0507 23:45:28.578961 2914 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-4230.1.1-n-afbb805c8a" May 7 23:45:28.680342 containerd[1741]: time="2025-05-07T23:45:28.680237645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-afbb805c8a,Uid:63e55f4e65487365fd7a52900876b4a0,Namespace:kube-system,Attempt:0,}" May 7 23:45:28.685065 containerd[1741]: time="2025-05-07T23:45:28.685008921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-afbb805c8a,Uid:9297c12043ec6d0edd092f3d40df06b9,Namespace:kube-system,Attempt:0,}" May 7 23:45:28.690741 containerd[1741]: time="2025-05-07T23:45:28.690443238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-afbb805c8a,Uid:2daa84d5d77aa92819d5fb2092601cbd,Namespace:kube-system,Attempt:0,}" May 7 23:45:28.804587 kubelet[2914]: E0507 23:45:28.804542 2914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-afbb805c8a?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="800ms" May 7 23:45:28.980783 kubelet[2914]: I0507 23:45:28.980686 2914 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-afbb805c8a" May 7 23:45:28.981176 kubelet[2914]: E0507 23:45:28.981145 2914 kubelet_node_status.go:95] "Unable 
to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-4230.1.1-n-afbb805c8a" May 7 23:45:29.030939 kubelet[2914]: W0507 23:45:29.030875 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused May 7 23:45:29.584163 kubelet[2914]: E0507 23:45:29.030947 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" May 7 23:45:29.584163 kubelet[2914]: W0507 23:45:29.092658 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-afbb805c8a&limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused May 7 23:45:29.584163 kubelet[2914]: E0507 23:45:29.092708 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-afbb805c8a&limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" May 7 23:45:29.584163 kubelet[2914]: W0507 23:45:29.277919 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused May 7 23:45:29.584163 kubelet[2914]: 
E0507 23:45:29.277957 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" May 7 23:45:29.605707 kubelet[2914]: E0507 23:45:29.605662 2914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-afbb805c8a?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="1.6s" May 7 23:45:29.736010 kubelet[2914]: W0507 23:45:29.735963 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused May 7 23:45:29.736010 kubelet[2914]: E0507 23:45:29.736014 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" May 7 23:45:29.783009 kubelet[2914]: I0507 23:45:29.782640 2914 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-afbb805c8a" May 7 23:45:29.783009 kubelet[2914]: E0507 23:45:29.782979 2914 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-4230.1.1-n-afbb805c8a" May 7 23:45:30.083881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2103329875.mount: Deactivated successfully. 
May 7 23:45:30.120049 containerd[1741]: time="2025-05-07T23:45:30.119402526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:45:30.128341 containerd[1741]: time="2025-05-07T23:45:30.128285600Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 7 23:45:30.139836 containerd[1741]: time="2025-05-07T23:45:30.139791632Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:45:30.145061 containerd[1741]: time="2025-05-07T23:45:30.144874389Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:45:30.151564 containerd[1741]: time="2025-05-07T23:45:30.151521904Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:45:30.153558 containerd[1741]: time="2025-05-07T23:45:30.153513943Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 7 23:45:30.156292 containerd[1741]: time="2025-05-07T23:45:30.156259981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 7 23:45:30.158807 containerd[1741]: time="2025-05-07T23:45:30.158765379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:45:30.159821 
containerd[1741]: time="2025-05-07T23:45:30.159569258Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.479249733s" May 7 23:45:30.163069 containerd[1741]: time="2025-05-07T23:45:30.162977936Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.472460698s" May 7 23:45:30.164342 containerd[1741]: time="2025-05-07T23:45:30.164312735Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.479199774s" May 7 23:45:30.272575 kubelet[2914]: E0507 23:45:30.272529 2914 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" May 7 23:45:30.780200 containerd[1741]: time="2025-05-07T23:45:30.779904348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:45:30.780612 containerd[1741]: time="2025-05-07T23:45:30.780455188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:45:30.780612 containerd[1741]: time="2025-05-07T23:45:30.780483388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:45:30.781065 containerd[1741]: time="2025-05-07T23:45:30.780948267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:45:30.785121 containerd[1741]: time="2025-05-07T23:45:30.784602985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:45:30.785121 containerd[1741]: time="2025-05-07T23:45:30.784721625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:45:30.785121 containerd[1741]: time="2025-05-07T23:45:30.784735905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:45:30.785121 containerd[1741]: time="2025-05-07T23:45:30.784846425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:45:30.786361 containerd[1741]: time="2025-05-07T23:45:30.786142424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:45:30.786361 containerd[1741]: time="2025-05-07T23:45:30.786254824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:45:30.786361 containerd[1741]: time="2025-05-07T23:45:30.786268344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:45:30.786526 containerd[1741]: time="2025-05-07T23:45:30.786368864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:45:30.804197 systemd[1]: Started cri-containerd-01293fe08d09328e62cdc5444e44a2d6b8a60dbf54710a55c9ddaf00efdfdafc.scope - libcontainer container 01293fe08d09328e62cdc5444e44a2d6b8a60dbf54710a55c9ddaf00efdfdafc. May 7 23:45:30.817412 systemd[1]: Started cri-containerd-0a6a3c453b152bf50756a561202ee4b349c4e9304aaa3a1cad5ee5044d6daefd.scope - libcontainer container 0a6a3c453b152bf50756a561202ee4b349c4e9304aaa3a1cad5ee5044d6daefd. May 7 23:45:30.823117 systemd[1]: Started cri-containerd-4890deb985106cfbcd21c24d945e24757ddb79f9d7045e89cf55f65f08110494.scope - libcontainer container 4890deb985106cfbcd21c24d945e24757ddb79f9d7045e89cf55f65f08110494. 
May 7 23:45:30.859381 containerd[1741]: time="2025-05-07T23:45:30.859336853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-afbb805c8a,Uid:63e55f4e65487365fd7a52900876b4a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"01293fe08d09328e62cdc5444e44a2d6b8a60dbf54710a55c9ddaf00efdfdafc\"" May 7 23:45:30.866750 containerd[1741]: time="2025-05-07T23:45:30.866360928Z" level=info msg="CreateContainer within sandbox \"01293fe08d09328e62cdc5444e44a2d6b8a60dbf54710a55c9ddaf00efdfdafc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 7 23:45:30.879324 containerd[1741]: time="2025-05-07T23:45:30.879221799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-afbb805c8a,Uid:9297c12043ec6d0edd092f3d40df06b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a6a3c453b152bf50756a561202ee4b349c4e9304aaa3a1cad5ee5044d6daefd\"" May 7 23:45:30.883837 containerd[1741]: time="2025-05-07T23:45:30.883764836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-afbb805c8a,Uid:2daa84d5d77aa92819d5fb2092601cbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4890deb985106cfbcd21c24d945e24757ddb79f9d7045e89cf55f65f08110494\"" May 7 23:45:30.886878 containerd[1741]: time="2025-05-07T23:45:30.886828754Z" level=info msg="CreateContainer within sandbox \"0a6a3c453b152bf50756a561202ee4b349c4e9304aaa3a1cad5ee5044d6daefd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 7 23:45:30.888339 containerd[1741]: time="2025-05-07T23:45:30.888173073Z" level=info msg="CreateContainer within sandbox \"4890deb985106cfbcd21c24d945e24757ddb79f9d7045e89cf55f65f08110494\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 7 23:45:30.966486 containerd[1741]: time="2025-05-07T23:45:30.966439299Z" level=info msg="CreateContainer within sandbox 
\"01293fe08d09328e62cdc5444e44a2d6b8a60dbf54710a55c9ddaf00efdfdafc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dff2c699d1cbbc3bfc4bc044801d3b794c4b0b8c9ad831a7ef9a37b1bcaa1355\"" May 7 23:45:30.967283 containerd[1741]: time="2025-05-07T23:45:30.967254138Z" level=info msg="StartContainer for \"dff2c699d1cbbc3bfc4bc044801d3b794c4b0b8c9ad831a7ef9a37b1bcaa1355\"" May 7 23:45:30.979237 containerd[1741]: time="2025-05-07T23:45:30.979194530Z" level=info msg="CreateContainer within sandbox \"0a6a3c453b152bf50756a561202ee4b349c4e9304aaa3a1cad5ee5044d6daefd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ee16403a79107bed8f26210cac58ec6bb24c2ef1854c0ad088e18a46a874ad0\"" May 7 23:45:30.985357 containerd[1741]: time="2025-05-07T23:45:30.984599126Z" level=info msg="CreateContainer within sandbox \"4890deb985106cfbcd21c24d945e24757ddb79f9d7045e89cf55f65f08110494\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"68fed9d85d6c0e638cf8a1f1671dd7c75f379de5aaf8cf6c64b27ffe944f2612\"" May 7 23:45:30.985357 containerd[1741]: time="2025-05-07T23:45:30.984641606Z" level=info msg="StartContainer for \"3ee16403a79107bed8f26210cac58ec6bb24c2ef1854c0ad088e18a46a874ad0\"" May 7 23:45:30.985357 containerd[1741]: time="2025-05-07T23:45:30.985022806Z" level=info msg="StartContainer for \"68fed9d85d6c0e638cf8a1f1671dd7c75f379de5aaf8cf6c64b27ffe944f2612\"" May 7 23:45:30.991218 systemd[1]: Started cri-containerd-dff2c699d1cbbc3bfc4bc044801d3b794c4b0b8c9ad831a7ef9a37b1bcaa1355.scope - libcontainer container dff2c699d1cbbc3bfc4bc044801d3b794c4b0b8c9ad831a7ef9a37b1bcaa1355. May 7 23:45:31.019248 systemd[1]: Started cri-containerd-68fed9d85d6c0e638cf8a1f1671dd7c75f379de5aaf8cf6c64b27ffe944f2612.scope - libcontainer container 68fed9d85d6c0e638cf8a1f1671dd7c75f379de5aaf8cf6c64b27ffe944f2612. 
May 7 23:45:31.023367 systemd[1]: Started cri-containerd-3ee16403a79107bed8f26210cac58ec6bb24c2ef1854c0ad088e18a46a874ad0.scope - libcontainer container 3ee16403a79107bed8f26210cac58ec6bb24c2ef1854c0ad088e18a46a874ad0. May 7 23:45:31.052119 containerd[1741]: time="2025-05-07T23:45:31.051548440Z" level=info msg="StartContainer for \"dff2c699d1cbbc3bfc4bc044801d3b794c4b0b8c9ad831a7ef9a37b1bcaa1355\" returns successfully" May 7 23:45:31.083226 containerd[1741]: time="2025-05-07T23:45:31.080918459Z" level=info msg="StartContainer for \"3ee16403a79107bed8f26210cac58ec6bb24c2ef1854c0ad088e18a46a874ad0\" returns successfully" May 7 23:45:31.110040 containerd[1741]: time="2025-05-07T23:45:31.109981039Z" level=info msg="StartContainer for \"68fed9d85d6c0e638cf8a1f1671dd7c75f379de5aaf8cf6c64b27ffe944f2612\" returns successfully" May 7 23:45:31.384724 kubelet[2914]: I0507 23:45:31.384699 2914 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-afbb805c8a" May 7 23:45:33.631769 kubelet[2914]: E0507 23:45:33.631721 2914 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.1-n-afbb805c8a\" not found" node="ci-4230.1.1-n-afbb805c8a" May 7 23:45:33.780637 kubelet[2914]: I0507 23:45:33.780588 2914 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.1-n-afbb805c8a" May 7 23:45:34.191452 kubelet[2914]: I0507 23:45:34.191202 2914 apiserver.go:52] "Watching apiserver" May 7 23:45:34.206715 kubelet[2914]: I0507 23:45:34.206653 2914 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 7 23:45:34.275962 kubelet[2914]: E0507 23:45:34.275760 2914 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.1.1-n-afbb805c8a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-afbb805c8a" May 7 23:45:35.811907 systemd[1]: Reload requested 
from client PID 3187 ('systemctl') (unit session-9.scope)... May 7 23:45:35.812069 systemd[1]: Reloading... May 7 23:45:35.901077 zram_generator::config[3231]: No configuration found. May 7 23:45:36.015919 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 7 23:45:36.134321 systemd[1]: Reloading finished in 321 ms. May 7 23:45:36.158998 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:45:36.176022 systemd[1]: kubelet.service: Deactivated successfully. May 7 23:45:36.176302 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:45:36.176364 systemd[1]: kubelet.service: Consumed 1.164s CPU time, 117M memory peak. May 7 23:45:36.181388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:45:36.362368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:45:36.373638 (kubelet)[3298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 7 23:45:36.423326 kubelet[3298]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 7 23:45:36.423326 kubelet[3298]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 7 23:45:36.423326 kubelet[3298]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 7 23:45:36.423326 kubelet[3298]: I0507 23:45:36.423060 3298 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 7 23:45:36.430053 kubelet[3298]: I0507 23:45:36.429541 3298 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 7 23:45:36.430053 kubelet[3298]: I0507 23:45:36.429568 3298 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 7 23:45:36.430053 kubelet[3298]: I0507 23:45:36.429805 3298 server.go:929] "Client rotation is on, will bootstrap in background" May 7 23:45:36.431383 kubelet[3298]: I0507 23:45:36.431361 3298 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 7 23:45:36.436786 kubelet[3298]: I0507 23:45:36.436764 3298 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 7 23:45:36.443570 kubelet[3298]: E0507 23:45:36.443524 3298 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 7 23:45:36.443740 kubelet[3298]: I0507 23:45:36.443728 3298 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 7 23:45:36.449061 kubelet[3298]: I0507 23:45:36.446902 3298 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
May 7 23:45:36.449549 kubelet[3298]: I0507 23:45:36.449493 3298 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 7 23:45:36.450393 kubelet[3298]: I0507 23:45:36.450312 3298 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 7 23:45:36.451258 kubelet[3298]: I0507 23:45:36.450754 3298 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-afbb805c8a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 7 23:45:36.451452 kubelet[3298]: I0507 23:45:36.451437 3298 topology_manager.go:138] "Creating topology manager with none policy"
May 7 23:45:36.451801 kubelet[3298]: I0507 23:45:36.451788 3298 container_manager_linux.go:300] "Creating device plugin manager"
May 7 23:45:36.451961 kubelet[3298]: I0507 23:45:36.451900 3298 state_mem.go:36] "Initialized new in-memory state store"
May 7 23:45:36.452536 kubelet[3298]: I0507 23:45:36.452416 3298 kubelet.go:408] "Attempting to sync node with API server"
May 7 23:45:36.452536 kubelet[3298]: I0507 23:45:36.452438 3298 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 7 23:45:36.457076 kubelet[3298]: I0507 23:45:36.455119 3298 kubelet.go:314] "Adding apiserver pod source"
May 7 23:45:36.457076 kubelet[3298]: I0507 23:45:36.455143 3298 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 7 23:45:36.457832 kubelet[3298]: I0507 23:45:36.457796 3298 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 7 23:45:36.461038 kubelet[3298]: I0507 23:45:36.458941 3298 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 7 23:45:36.461649 kubelet[3298]: I0507 23:45:36.461633 3298 server.go:1269] "Started kubelet"
May 7 23:45:36.466835 kubelet[3298]: I0507 23:45:36.464969 3298 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 7 23:45:36.468876 kubelet[3298]: I0507 23:45:36.466949 3298 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 7 23:45:36.473711 kubelet[3298]: I0507 23:45:36.473688 3298 server.go:460] "Adding debug handlers to kubelet server"
May 7 23:45:36.474518 kubelet[3298]: I0507 23:45:36.467313 3298 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 7 23:45:36.474730 kubelet[3298]: I0507 23:45:36.474712 3298 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 7 23:45:36.474946 kubelet[3298]: E0507 23:45:36.474925 3298 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-afbb805c8a\" not found"
May 7 23:45:36.475312 kubelet[3298]: I0507 23:45:36.467042 3298 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 7 23:45:36.475432 kubelet[3298]: I0507 23:45:36.475407 3298 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 7 23:45:36.475689 kubelet[3298]: I0507 23:45:36.475674 3298 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 7 23:45:36.475825 kubelet[3298]: I0507 23:45:36.475806 3298 reconciler.go:26] "Reconciler: start to sync state"
May 7 23:45:36.484798 kubelet[3298]: I0507 23:45:36.484764 3298 factory.go:221] Registration of the systemd container factory successfully
May 7 23:45:36.484921 kubelet[3298]: I0507 23:45:36.484897 3298 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 7 23:45:36.497454 kubelet[3298]: I0507 23:45:36.497417 3298 factory.go:221] Registration of the containerd container factory successfully
May 7 23:45:36.499263 kubelet[3298]: I0507 23:45:36.499127 3298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 7 23:45:36.510076 kubelet[3298]: I0507 23:45:36.508899 3298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 7 23:45:36.510076 kubelet[3298]: I0507 23:45:36.508935 3298 status_manager.go:217] "Starting to sync pod status with apiserver"
May 7 23:45:36.510076 kubelet[3298]: I0507 23:45:36.508952 3298 kubelet.go:2321] "Starting kubelet main sync loop"
May 7 23:45:36.510076 kubelet[3298]: E0507 23:45:36.509002 3298 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 7 23:45:36.553213 kubelet[3298]: I0507 23:45:36.553185 3298 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 7 23:45:36.553213 kubelet[3298]: I0507 23:45:36.553204 3298 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 7 23:45:36.553213 kubelet[3298]: I0507 23:45:36.553224 3298 state_mem.go:36] "Initialized new in-memory state store"
May 7 23:45:36.553397 kubelet[3298]: I0507 23:45:36.553377 3298 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 7 23:45:36.553426 kubelet[3298]: I0507 23:45:36.553393 3298 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 7 23:45:36.553426 kubelet[3298]: I0507 23:45:36.553412 3298 policy_none.go:49] "None policy: Start"
May 7 23:45:36.554012 kubelet[3298]: I0507 23:45:36.553984 3298 memory_manager.go:170] "Starting memorymanager" policy="None"
May 7 23:45:36.554012 kubelet[3298]: I0507 23:45:36.554010 3298 state_mem.go:35] "Initializing new in-memory state store"
May 7 23:45:36.554201 kubelet[3298]: I0507 23:45:36.554180 3298 state_mem.go:75] "Updated machine memory state"
May 7 23:45:36.557958 kubelet[3298]: I0507 23:45:36.557935 3298 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 7 23:45:36.558448 kubelet[3298]: I0507 23:45:36.558110 3298 eviction_manager.go:189] "Eviction manager: starting control loop"
May 7 23:45:36.558448 kubelet[3298]: I0507 23:45:36.558128 3298 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 7 23:45:36.558448 kubelet[3298]: I0507 23:45:36.558302 3298 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 7 23:45:36.613730 kubelet[3298]: W0507 23:45:36.613686 3298 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 7 23:45:36.620056 kubelet[3298]: W0507 23:45:36.618785 3298 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 7 23:45:36.620056 kubelet[3298]: W0507 23:45:36.618943 3298 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 7 23:45:36.661516 kubelet[3298]: I0507 23:45:36.661480 3298 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-afbb805c8a"
May 7 23:45:36.673208 kubelet[3298]: I0507 23:45:36.673114 3298 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.1.1-n-afbb805c8a"
May 7 23:45:36.673826 kubelet[3298]: I0507 23:45:36.673759 3298 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.1-n-afbb805c8a"
May 7 23:45:36.777065 kubelet[3298]: I0507 23:45:36.776948 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63e55f4e65487365fd7a52900876b4a0-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-afbb805c8a\" (UID: \"63e55f4e65487365fd7a52900876b4a0\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-afbb805c8a"
May 7 23:45:36.777065 kubelet[3298]: I0507 23:45:36.776989 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63e55f4e65487365fd7a52900876b4a0-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-afbb805c8a\" (UID: \"63e55f4e65487365fd7a52900876b4a0\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-afbb805c8a"
May 7 23:45:36.777065 kubelet[3298]: I0507 23:45:36.777014 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9297c12043ec6d0edd092f3d40df06b9-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-afbb805c8a\" (UID: \"9297c12043ec6d0edd092f3d40df06b9\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-afbb805c8a"
May 7 23:45:36.777065 kubelet[3298]: I0507 23:45:36.777051 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9297c12043ec6d0edd092f3d40df06b9-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-afbb805c8a\" (UID: \"9297c12043ec6d0edd092f3d40df06b9\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-afbb805c8a"
May 7 23:45:36.777065 kubelet[3298]: I0507 23:45:36.777075 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9297c12043ec6d0edd092f3d40df06b9-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-afbb805c8a\" (UID: \"9297c12043ec6d0edd092f3d40df06b9\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-afbb805c8a"
May 7 23:45:36.777296 kubelet[3298]: I0507 23:45:36.777090 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9297c12043ec6d0edd092f3d40df06b9-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-afbb805c8a\" (UID: \"9297c12043ec6d0edd092f3d40df06b9\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-afbb805c8a"
May 7 23:45:36.777296 kubelet[3298]: I0507 23:45:36.777106 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9297c12043ec6d0edd092f3d40df06b9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-afbb805c8a\" (UID: \"9297c12043ec6d0edd092f3d40df06b9\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-afbb805c8a"
May 7 23:45:36.777296 kubelet[3298]: I0507 23:45:36.777124 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2daa84d5d77aa92819d5fb2092601cbd-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-afbb805c8a\" (UID: \"2daa84d5d77aa92819d5fb2092601cbd\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-afbb805c8a"
May 7 23:45:36.777296 kubelet[3298]: I0507 23:45:36.777142 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63e55f4e65487365fd7a52900876b4a0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-afbb805c8a\" (UID: \"63e55f4e65487365fd7a52900876b4a0\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-afbb805c8a"
May 7 23:45:36.827760 sudo[3328]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 7 23:45:36.828059 sudo[3328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 7 23:45:37.280479 sudo[3328]: pam_unix(sudo:session): session closed for user root
May 7 23:45:37.456108 kubelet[3298]: I0507 23:45:37.455858 3298 apiserver.go:52] "Watching apiserver"
May 7 23:45:37.476081 kubelet[3298]: I0507 23:45:37.476015 3298 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 7 23:45:37.553364 kubelet[3298]: W0507 23:45:37.552639 3298 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 7 23:45:37.553364 kubelet[3298]: E0507 23:45:37.552705 3298 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-n-afbb805c8a\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-n-afbb805c8a"
May 7 23:45:37.579430 kubelet[3298]: I0507 23:45:37.579062 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-n-afbb805c8a" podStartSLOduration=1.579047004 podStartE2EDuration="1.579047004s" podCreationTimestamp="2025-05-07 23:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:45:37.578492244 +0000 UTC m=+1.200926197" watchObservedRunningTime="2025-05-07 23:45:37.579047004 +0000 UTC m=+1.201480957"
May 7 23:45:37.580139 kubelet[3298]: I0507 23:45:37.579841 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-n-afbb805c8a" podStartSLOduration=1.5798308030000001 podStartE2EDuration="1.579830803s" podCreationTimestamp="2025-05-07 23:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:45:37.562851295 +0000 UTC m=+1.185285248" watchObservedRunningTime="2025-05-07 23:45:37.579830803 +0000 UTC m=+1.202264756"
May 7 23:45:37.606049 kubelet[3298]: I0507 23:45:37.605812 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-afbb805c8a" podStartSLOduration=1.605793905 podStartE2EDuration="1.605793905s" podCreationTimestamp="2025-05-07 23:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:45:37.592770914 +0000 UTC m=+1.215204867" watchObservedRunningTime="2025-05-07 23:45:37.605793905 +0000 UTC m=+1.228227858"
May 7 23:45:38.817816 sudo[2292]: pam_unix(sudo:session): session closed for user root
May 7 23:45:38.881717 sshd[2291]: Connection closed by 10.200.16.10 port 47012
May 7 23:45:38.882304 sshd-session[2289]: pam_unix(sshd:session): session closed for user core
May 7 23:45:38.885717 systemd[1]: sshd@6-10.200.20.32:22-10.200.16.10:47012.service: Deactivated successfully.
May 7 23:45:38.887464 systemd[1]: session-9.scope: Deactivated successfully.
May 7 23:45:38.887638 systemd[1]: session-9.scope: Consumed 6.488s CPU time, 255.7M memory peak.
May 7 23:45:38.888762 systemd-logind[1703]: Session 9 logged out. Waiting for processes to exit.
May 7 23:45:38.889976 systemd-logind[1703]: Removed session 9.
May 7 23:45:41.331885 kubelet[3298]: I0507 23:45:41.331800 3298 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 7 23:45:41.334054 containerd[1741]: time="2025-05-07T23:45:41.333584485Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 7 23:45:41.334711 kubelet[3298]: I0507 23:45:41.333822 3298 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 7 23:45:42.021192 systemd[1]: Created slice kubepods-besteffort-podc1f174d4_9e67_4370_a73c_d3e9a875c7f0.slice - libcontainer container kubepods-besteffort-podc1f174d4_9e67_4370_a73c_d3e9a875c7f0.slice.
May 7 23:45:42.034433 systemd[1]: Created slice kubepods-burstable-pod1542033d_985a_404b_aab0_bbc36d1e1a2e.slice - libcontainer container kubepods-burstable-pod1542033d_985a_404b_aab0_bbc36d1e1a2e.slice.
May 7 23:45:42.109264 kubelet[3298]: I0507 23:45:42.109226 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-xtables-lock\") pod \"cilium-hjl2c\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " pod="kube-system/cilium-hjl2c"
May 7 23:45:42.109264 kubelet[3298]: I0507 23:45:42.109265 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-host-proc-sys-kernel\") pod \"cilium-hjl2c\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " pod="kube-system/cilium-hjl2c"
May 7 23:45:42.109430 kubelet[3298]: I0507 23:45:42.109283 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1542033d-985a-404b-aab0-bbc36d1e1a2e-clustermesh-secrets\") pod \"cilium-hjl2c\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " pod="kube-system/cilium-hjl2c"
May 7 23:45:42.109430 kubelet[3298]: I0507 23:45:42.109297 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1542033d-985a-404b-aab0-bbc36d1e1a2e-cilium-config-path\") pod \"cilium-hjl2c\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " pod="kube-system/cilium-hjl2c"
May 7 23:45:42.109430 kubelet[3298]: I0507 23:45:42.109312 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-cilium-cgroup\") pod \"cilium-hjl2c\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " pod="kube-system/cilium-hjl2c"
May 7 23:45:42.109430 kubelet[3298]: I0507 23:45:42.109325 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-cni-path\") pod \"cilium-hjl2c\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " pod="kube-system/cilium-hjl2c"
May 7 23:45:42.109430 kubelet[3298]: I0507 23:45:42.109341 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-etc-cni-netd\") pod \"cilium-hjl2c\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " pod="kube-system/cilium-hjl2c"
May 7 23:45:42.109430 kubelet[3298]: I0507 23:45:42.109356 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-cilium-run\") pod \"cilium-hjl2c\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " pod="kube-system/cilium-hjl2c"
May 7 23:45:42.109560 kubelet[3298]: I0507 23:45:42.109371 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1f174d4-9e67-4370-a73c-d3e9a875c7f0-lib-modules\") pod \"kube-proxy-2hhwg\" (UID: \"c1f174d4-9e67-4370-a73c-d3e9a875c7f0\") " pod="kube-system/kube-proxy-2hhwg"
May 7 23:45:42.109560 kubelet[3298]: I0507 23:45:42.109386 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1542033d-985a-404b-aab0-bbc36d1e1a2e-hubble-tls\") pod \"cilium-hjl2c\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " pod="kube-system/cilium-hjl2c"
May 7 23:45:42.109560 kubelet[3298]: I0507 23:45:42.109402 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c1f174d4-9e67-4370-a73c-d3e9a875c7f0-kube-proxy\") pod \"kube-proxy-2hhwg\" (UID: \"c1f174d4-9e67-4370-a73c-d3e9a875c7f0\") " pod="kube-system/kube-proxy-2hhwg"
May 7 23:45:42.109560 kubelet[3298]: I0507 23:45:42.109415 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-hostproc\") pod \"cilium-hjl2c\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " pod="kube-system/cilium-hjl2c"
May 7 23:45:42.109560 kubelet[3298]: I0507 23:45:42.109430 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1f174d4-9e67-4370-a73c-d3e9a875c7f0-xtables-lock\") pod \"kube-proxy-2hhwg\" (UID: \"c1f174d4-9e67-4370-a73c-d3e9a875c7f0\") " pod="kube-system/kube-proxy-2hhwg"
May 7 23:45:42.109560 kubelet[3298]: I0507 23:45:42.109458 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftbr7\" (UniqueName: \"kubernetes.io/projected/c1f174d4-9e67-4370-a73c-d3e9a875c7f0-kube-api-access-ftbr7\") pod \"kube-proxy-2hhwg\" (UID: \"c1f174d4-9e67-4370-a73c-d3e9a875c7f0\") " pod="kube-system/kube-proxy-2hhwg"
May 7 23:45:42.109683 kubelet[3298]: I0507 23:45:42.109473 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-host-proc-sys-net\") pod \"cilium-hjl2c\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " pod="kube-system/cilium-hjl2c"
May 7 23:45:42.109683 kubelet[3298]: I0507 23:45:42.109489 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-bpf-maps\") pod \"cilium-hjl2c\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " pod="kube-system/cilium-hjl2c"
May 7 23:45:42.109683 kubelet[3298]: I0507 23:45:42.109503 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-lib-modules\") pod \"cilium-hjl2c\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " pod="kube-system/cilium-hjl2c"
May 7 23:45:42.109683 kubelet[3298]: I0507 23:45:42.109517 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqpxj\" (UniqueName: \"kubernetes.io/projected/1542033d-985a-404b-aab0-bbc36d1e1a2e-kube-api-access-lqpxj\") pod \"cilium-hjl2c\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " pod="kube-system/cilium-hjl2c"
May 7 23:45:42.228283 kubelet[3298]: E0507 23:45:42.228243 3298 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 7 23:45:42.228283 kubelet[3298]: E0507 23:45:42.228274 3298 projected.go:194] Error preparing data for projected volume kube-api-access-ftbr7 for pod kube-system/kube-proxy-2hhwg: configmap "kube-root-ca.crt" not found
May 7 23:45:42.228458 kubelet[3298]: E0507 23:45:42.228346 3298 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1f174d4-9e67-4370-a73c-d3e9a875c7f0-kube-api-access-ftbr7 podName:c1f174d4-9e67-4370-a73c-d3e9a875c7f0 nodeName:}" failed. No retries permitted until 2025-05-07 23:45:42.728326816 +0000 UTC m=+6.350760769 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ftbr7" (UniqueName: "kubernetes.io/projected/c1f174d4-9e67-4370-a73c-d3e9a875c7f0-kube-api-access-ftbr7") pod "kube-proxy-2hhwg" (UID: "c1f174d4-9e67-4370-a73c-d3e9a875c7f0") : configmap "kube-root-ca.crt" not found
May 7 23:45:42.236558 kubelet[3298]: E0507 23:45:42.236520 3298 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 7 23:45:42.236558 kubelet[3298]: E0507 23:45:42.236553 3298 projected.go:194] Error preparing data for projected volume kube-api-access-lqpxj for pod kube-system/cilium-hjl2c: configmap "kube-root-ca.crt" not found
May 7 23:45:42.236711 kubelet[3298]: E0507 23:45:42.236616 3298 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1542033d-985a-404b-aab0-bbc36d1e1a2e-kube-api-access-lqpxj podName:1542033d-985a-404b-aab0-bbc36d1e1a2e nodeName:}" failed. No retries permitted until 2025-05-07 23:45:42.73659701 +0000 UTC m=+6.359030923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lqpxj" (UniqueName: "kubernetes.io/projected/1542033d-985a-404b-aab0-bbc36d1e1a2e-kube-api-access-lqpxj") pod "cilium-hjl2c" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e") : configmap "kube-root-ca.crt" not found
May 7 23:45:42.425115 systemd[1]: Created slice kubepods-besteffort-podf7eb64cc_8db6_43f5_ab4d_6b5088ab18bd.slice - libcontainer container kubepods-besteffort-podf7eb64cc_8db6_43f5_ab4d_6b5088ab18bd.slice.
May 7 23:45:42.513253 kubelet[3298]: I0507 23:45:42.513215 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd-cilium-config-path\") pod \"cilium-operator-5d85765b45-lqnnp\" (UID: \"f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd\") " pod="kube-system/cilium-operator-5d85765b45-lqnnp"
May 7 23:45:42.513253 kubelet[3298]: I0507 23:45:42.513278 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsq5w\" (UniqueName: \"kubernetes.io/projected/f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd-kube-api-access-lsq5w\") pod \"cilium-operator-5d85765b45-lqnnp\" (UID: \"f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd\") " pod="kube-system/cilium-operator-5d85765b45-lqnnp"
May 7 23:45:42.731798 containerd[1741]: time="2025-05-07T23:45:42.731407372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lqnnp,Uid:f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd,Namespace:kube-system,Attempt:0,}"
May 7 23:45:42.783452 containerd[1741]: time="2025-05-07T23:45:42.783307345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 7 23:45:42.783452 containerd[1741]: time="2025-05-07T23:45:42.783397065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 7 23:45:42.783734 containerd[1741]: time="2025-05-07T23:45:42.783407825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:45:42.784087 containerd[1741]: time="2025-05-07T23:45:42.783570585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:45:42.801224 systemd[1]: Started cri-containerd-43ae35bebccc7ad0cc965de54365749d1c2d7cbce201880e89fcbc15c0877388.scope - libcontainer container 43ae35bebccc7ad0cc965de54365749d1c2d7cbce201880e89fcbc15c0877388.
May 7 23:45:42.832002 containerd[1741]: time="2025-05-07T23:45:42.831947199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lqnnp,Uid:f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"43ae35bebccc7ad0cc965de54365749d1c2d7cbce201880e89fcbc15c0877388\""
May 7 23:45:42.835436 containerd[1741]: time="2025-05-07T23:45:42.835266237Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 7 23:45:42.929993 containerd[1741]: time="2025-05-07T23:45:42.929881467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2hhwg,Uid:c1f174d4-9e67-4370-a73c-d3e9a875c7f0,Namespace:kube-system,Attempt:0,}"
May 7 23:45:42.938458 containerd[1741]: time="2025-05-07T23:45:42.938211343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hjl2c,Uid:1542033d-985a-404b-aab0-bbc36d1e1a2e,Namespace:kube-system,Attempt:0,}"
May 7 23:45:42.978090 containerd[1741]: time="2025-05-07T23:45:42.977942921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 7 23:45:42.978716 containerd[1741]: time="2025-05-07T23:45:42.978244881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 7 23:45:42.978716 containerd[1741]: time="2025-05-07T23:45:42.978576201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:45:42.978716 containerd[1741]: time="2025-05-07T23:45:42.978678961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:45:42.998332 systemd[1]: Started cri-containerd-31cc25bebe8044965e61685d7021c2f1ce6d08da4b22345bf4b3ada6851cefa9.scope - libcontainer container 31cc25bebe8044965e61685d7021c2f1ce6d08da4b22345bf4b3ada6851cefa9.
May 7 23:45:43.003757 containerd[1741]: time="2025-05-07T23:45:43.003412668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 7 23:45:43.004214 containerd[1741]: time="2025-05-07T23:45:43.003502508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 7 23:45:43.004214 containerd[1741]: time="2025-05-07T23:45:43.004176148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:45:43.004493 containerd[1741]: time="2025-05-07T23:45:43.004441107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:45:43.024216 systemd[1]: Started cri-containerd-d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3.scope - libcontainer container d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3.
May 7 23:45:43.041383 containerd[1741]: time="2025-05-07T23:45:43.041319648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2hhwg,Uid:c1f174d4-9e67-4370-a73c-d3e9a875c7f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"31cc25bebe8044965e61685d7021c2f1ce6d08da4b22345bf4b3ada6851cefa9\""
May 7 23:45:43.048125 containerd[1741]: time="2025-05-07T23:45:43.048074484Z" level=info msg="CreateContainer within sandbox \"31cc25bebe8044965e61685d7021c2f1ce6d08da4b22345bf4b3ada6851cefa9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 7 23:45:43.059522 containerd[1741]: time="2025-05-07T23:45:43.059480878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hjl2c,Uid:1542033d-985a-404b-aab0-bbc36d1e1a2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3\""
May 7 23:45:43.102795 containerd[1741]: time="2025-05-07T23:45:43.102740255Z" level=info msg="CreateContainer within sandbox \"31cc25bebe8044965e61685d7021c2f1ce6d08da4b22345bf4b3ada6851cefa9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2be72c4789ec4bd4e52f9a25b5e417267ea247d4b7ddecc39fa1adefcb74557c\""
May 7 23:45:43.104268 containerd[1741]: time="2025-05-07T23:45:43.104229174Z" level=info msg="StartContainer for \"2be72c4789ec4bd4e52f9a25b5e417267ea247d4b7ddecc39fa1adefcb74557c\""
May 7 23:45:43.130269 systemd[1]: Started cri-containerd-2be72c4789ec4bd4e52f9a25b5e417267ea247d4b7ddecc39fa1adefcb74557c.scope - libcontainer container 2be72c4789ec4bd4e52f9a25b5e417267ea247d4b7ddecc39fa1adefcb74557c.
May 7 23:45:43.160948 containerd[1741]: time="2025-05-07T23:45:43.160901024Z" level=info msg="StartContainer for \"2be72c4789ec4bd4e52f9a25b5e417267ea247d4b7ddecc39fa1adefcb74557c\" returns successfully"
May 7 23:45:43.592865 kubelet[3298]: I0507 23:45:43.592796 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2hhwg" podStartSLOduration=2.592776115 podStartE2EDuration="2.592776115s" podCreationTimestamp="2025-05-07 23:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:45:43.566778489 +0000 UTC m=+7.189212442" watchObservedRunningTime="2025-05-07 23:45:43.592776115 +0000 UTC m=+7.215210068"
May 7 23:45:46.017066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2043570313.mount: Deactivated successfully.
May 7 23:45:46.947118 containerd[1741]: time="2025-05-07T23:45:46.947062856Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:46.949513 containerd[1741]: time="2025-05-07T23:45:46.949466215Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 7 23:45:46.953590 containerd[1741]: time="2025-05-07T23:45:46.953527172Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:46.955924 containerd[1741]: time="2025-05-07T23:45:46.955864011Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.120422974s"
May 7 23:45:46.956200 containerd[1741]: time="2025-05-07T23:45:46.956063171Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 7 23:45:46.957399 containerd[1741]: time="2025-05-07T23:45:46.957208290Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 7 23:45:46.959167 containerd[1741]: time="2025-05-07T23:45:46.959134369Z" level=info msg="CreateContainer within sandbox \"43ae35bebccc7ad0cc965de54365749d1c2d7cbce201880e89fcbc15c0877388\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 7 23:45:47.010004 containerd[1741]: time="2025-05-07T23:45:47.009949942Z" level=info msg="CreateContainer within sandbox \"43ae35bebccc7ad0cc965de54365749d1c2d7cbce201880e89fcbc15c0877388\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7\""
May 7 23:45:47.011439 containerd[1741]: time="2025-05-07T23:45:47.010879422Z" level=info msg="StartContainer for \"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7\""
May 7 23:45:47.044229 systemd[1]: Started cri-containerd-a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7.scope - libcontainer container a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7.
May 7 23:45:47.080588 containerd[1741]: time="2025-05-07T23:45:47.080537625Z" level=info msg="StartContainer for \"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7\" returns successfully"
May 7 23:45:57.199828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount125225005.mount: Deactivated successfully.
May 7 23:45:59.705573 containerd[1741]: time="2025-05-07T23:45:59.705258149Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:59.707885 containerd[1741]: time="2025-05-07T23:45:59.707845707Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 7 23:45:59.713307 containerd[1741]: time="2025-05-07T23:45:59.713262824Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:59.715691 containerd[1741]: time="2025-05-07T23:45:59.715642982Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.758400092s"
May 7 23:45:59.715691 containerd[1741]: time="2025-05-07T23:45:59.715691262Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 7 23:45:59.718640 containerd[1741]: time="2025-05-07T23:45:59.718554780Z" level=info msg="CreateContainer within sandbox \"d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 7 23:45:59.749622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348433344.mount: Deactivated successfully.
May 7 23:45:59.764768 containerd[1741]: time="2025-05-07T23:45:59.764534268Z" level=info msg="CreateContainer within sandbox \"d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b\""
May 7 23:45:59.765156 containerd[1741]: time="2025-05-07T23:45:59.765098788Z" level=info msg="StartContainer for \"ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b\""
May 7 23:45:59.798241 systemd[1]: Started cri-containerd-ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b.scope - libcontainer container ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b.
May 7 23:45:59.825563 containerd[1741]: time="2025-05-07T23:45:59.824697546Z" level=info msg="StartContainer for \"ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b\" returns successfully"
May 7 23:45:59.830120 systemd[1]: cri-containerd-ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b.scope: Deactivated successfully.
May 7 23:46:00.605349 kubelet[3298]: I0507 23:46:00.604793 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-lqnnp" podStartSLOduration=14.48125159 podStartE2EDuration="18.604777923s" podCreationTimestamp="2025-05-07 23:45:42 +0000 UTC" firstStartedPulling="2025-05-07 23:45:42.833577478 +0000 UTC m=+6.456011431" lastFinishedPulling="2025-05-07 23:45:46.957103811 +0000 UTC m=+10.579537764" observedRunningTime="2025-05-07 23:45:47.631125133 +0000 UTC m=+11.253559086" watchObservedRunningTime="2025-05-07 23:46:00.604777923 +0000 UTC m=+24.227211876"
May 7 23:46:00.745253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b-rootfs.mount: Deactivated successfully.
May 7 23:46:00.949176 containerd[1741]: time="2025-05-07T23:46:00.948932683Z" level=info msg="shim disconnected" id=ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b namespace=k8s.io
May 7 23:46:00.949176 containerd[1741]: time="2025-05-07T23:46:00.948987603Z" level=warning msg="cleaning up after shim disconnected" id=ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b namespace=k8s.io
May 7 23:46:00.949176 containerd[1741]: time="2025-05-07T23:46:00.948995323Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:46:00.958381 containerd[1741]: time="2025-05-07T23:46:00.958264957Z" level=warning msg="cleanup warnings time=\"2025-05-07T23:46:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 7 23:46:01.591503 containerd[1741]: time="2025-05-07T23:46:01.591270316Z" level=info msg="CreateContainer within sandbox \"d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 7 23:46:01.633092 containerd[1741]: time="2025-05-07T23:46:01.633018767Z" level=info msg="CreateContainer within sandbox \"d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e\""
May 7 23:46:01.633782 containerd[1741]: time="2025-05-07T23:46:01.633741326Z" level=info msg="StartContainer for \"e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e\""
May 7 23:46:01.661214 systemd[1]: Started cri-containerd-e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e.scope - libcontainer container e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e.
May 7 23:46:01.691211 containerd[1741]: time="2025-05-07T23:46:01.691082886Z" level=info msg="StartContainer for \"e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e\" returns successfully"
May 7 23:46:01.699961 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 7 23:46:01.700192 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 7 23:46:01.700369 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 7 23:46:01.708391 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 7 23:46:01.708620 systemd[1]: cri-containerd-e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e.scope: Deactivated successfully.
May 7 23:46:01.720798 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 7 23:46:01.745808 containerd[1741]: time="2025-05-07T23:46:01.745624288Z" level=info msg="shim disconnected" id=e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e namespace=k8s.io
May 7 23:46:01.745808 containerd[1741]: time="2025-05-07T23:46:01.745671968Z" level=warning msg="cleaning up after shim disconnected" id=e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e namespace=k8s.io
May 7 23:46:01.745808 containerd[1741]: time="2025-05-07T23:46:01.745680288Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:46:01.746597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e-rootfs.mount: Deactivated successfully.
May 7 23:46:02.594745 containerd[1741]: time="2025-05-07T23:46:02.594696857Z" level=info msg="CreateContainer within sandbox \"d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 7 23:46:02.636237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2609219999.mount: Deactivated successfully.
May 7 23:46:02.654337 containerd[1741]: time="2025-05-07T23:46:02.654279856Z" level=info msg="CreateContainer within sandbox \"d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4\""
May 7 23:46:02.655194 containerd[1741]: time="2025-05-07T23:46:02.655161335Z" level=info msg="StartContainer for \"ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4\""
May 7 23:46:02.691244 systemd[1]: Started cri-containerd-ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4.scope - libcontainer container ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4.
May 7 23:46:02.719180 systemd[1]: cri-containerd-ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4.scope: Deactivated successfully.
May 7 23:46:02.725586 containerd[1741]: time="2025-05-07T23:46:02.725521166Z" level=info msg="StartContainer for \"ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4\" returns successfully"
May 7 23:46:02.748111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4-rootfs.mount: Deactivated successfully.
May 7 23:46:02.759525 containerd[1741]: time="2025-05-07T23:46:02.759436942Z" level=info msg="shim disconnected" id=ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4 namespace=k8s.io
May 7 23:46:02.759525 containerd[1741]: time="2025-05-07T23:46:02.759517902Z" level=warning msg="cleaning up after shim disconnected" id=ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4 namespace=k8s.io
May 7 23:46:02.759735 containerd[1741]: time="2025-05-07T23:46:02.759573942Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:46:03.599738 containerd[1741]: time="2025-05-07T23:46:03.598618478Z" level=info msg="CreateContainer within sandbox \"d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 7 23:46:03.649187 containerd[1741]: time="2025-05-07T23:46:03.649137363Z" level=info msg="CreateContainer within sandbox \"d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738\""
May 7 23:46:03.649822 containerd[1741]: time="2025-05-07T23:46:03.649643962Z" level=info msg="StartContainer for \"ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738\""
May 7 23:46:03.680230 systemd[1]: Started cri-containerd-ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738.scope - libcontainer container ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738.
May 7 23:46:03.706641 systemd[1]: cri-containerd-ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738.scope: Deactivated successfully.
May 7 23:46:03.713008 containerd[1741]: time="2025-05-07T23:46:03.712964598Z" level=info msg="StartContainer for \"ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738\" returns successfully"
May 7 23:46:03.742158 containerd[1741]: time="2025-05-07T23:46:03.742103298Z" level=info msg="shim disconnected" id=ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738 namespace=k8s.io
May 7 23:46:03.742158 containerd[1741]: time="2025-05-07T23:46:03.742152218Z" level=warning msg="cleaning up after shim disconnected" id=ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738 namespace=k8s.io
May 7 23:46:03.742158 containerd[1741]: time="2025-05-07T23:46:03.742160898Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:46:03.749174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738-rootfs.mount: Deactivated successfully.
May 7 23:46:04.603276 containerd[1741]: time="2025-05-07T23:46:04.603231018Z" level=info msg="CreateContainer within sandbox \"d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 7 23:46:04.648227 containerd[1741]: time="2025-05-07T23:46:04.648177947Z" level=info msg="CreateContainer within sandbox \"d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9\""
May 7 23:46:04.649006 containerd[1741]: time="2025-05-07T23:46:04.648941147Z" level=info msg="StartContainer for \"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9\""
May 7 23:46:04.681239 systemd[1]: Started cri-containerd-a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9.scope - libcontainer container a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9.
May 7 23:46:04.715562 containerd[1741]: time="2025-05-07T23:46:04.715485460Z" level=info msg="StartContainer for \"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9\" returns successfully"
May 7 23:46:04.835219 kubelet[3298]: I0507 23:46:04.834998 3298 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 7 23:46:04.878963 systemd[1]: Created slice kubepods-burstable-podb9ce4285_0a00_4e73_a446_a6614e489b86.slice - libcontainer container kubepods-burstable-podb9ce4285_0a00_4e73_a446_a6614e489b86.slice.
May 7 23:46:04.893123 systemd[1]: Created slice kubepods-burstable-pod76c57618_0b9b_46d5_a8fe_f32847fe0a0c.slice - libcontainer container kubepods-burstable-pod76c57618_0b9b_46d5_a8fe_f32847fe0a0c.slice.
May 7 23:46:04.966387 kubelet[3298]: I0507 23:46:04.966329 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxb76\" (UniqueName: \"kubernetes.io/projected/b9ce4285-0a00-4e73-a446-a6614e489b86-kube-api-access-xxb76\") pod \"coredns-6f6b679f8f-9fs5z\" (UID: \"b9ce4285-0a00-4e73-a446-a6614e489b86\") " pod="kube-system/coredns-6f6b679f8f-9fs5z"
May 7 23:46:04.966387 kubelet[3298]: I0507 23:46:04.966378 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9ce4285-0a00-4e73-a446-a6614e489b86-config-volume\") pod \"coredns-6f6b679f8f-9fs5z\" (UID: \"b9ce4285-0a00-4e73-a446-a6614e489b86\") " pod="kube-system/coredns-6f6b679f8f-9fs5z"
May 7 23:46:04.966387 kubelet[3298]: I0507 23:46:04.966396 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76c57618-0b9b-46d5-a8fe-f32847fe0a0c-config-volume\") pod \"coredns-6f6b679f8f-zczd7\" (UID: \"76c57618-0b9b-46d5-a8fe-f32847fe0a0c\") " pod="kube-system/coredns-6f6b679f8f-zczd7"
May 7 23:46:04.966574 kubelet[3298]: I0507 23:46:04.966413 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8m7h\" (UniqueName: \"kubernetes.io/projected/76c57618-0b9b-46d5-a8fe-f32847fe0a0c-kube-api-access-m8m7h\") pod \"coredns-6f6b679f8f-zczd7\" (UID: \"76c57618-0b9b-46d5-a8fe-f32847fe0a0c\") " pod="kube-system/coredns-6f6b679f8f-zczd7"
May 7 23:46:05.185122 containerd[1741]: time="2025-05-07T23:46:05.184724613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9fs5z,Uid:b9ce4285-0a00-4e73-a446-a6614e489b86,Namespace:kube-system,Attempt:0,}"
May 7 23:46:05.201410 containerd[1741]: time="2025-05-07T23:46:05.201109122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zczd7,Uid:76c57618-0b9b-46d5-a8fe-f32847fe0a0c,Namespace:kube-system,Attempt:0,}"
May 7 23:46:05.627548 kubelet[3298]: I0507 23:46:05.627443 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hjl2c" podStartSLOduration=7.9717398809999995 podStartE2EDuration="24.627424665s" podCreationTimestamp="2025-05-07 23:45:41 +0000 UTC" firstStartedPulling="2025-05-07 23:45:43.060893117 +0000 UTC m=+6.683327070" lastFinishedPulling="2025-05-07 23:45:59.716577901 +0000 UTC m=+23.339011854" observedRunningTime="2025-05-07 23:46:05.625314427 +0000 UTC m=+29.247748380" watchObservedRunningTime="2025-05-07 23:46:05.627424665 +0000 UTC m=+29.249858618"
May 7 23:46:06.975339 systemd-networkd[1428]: cilium_host: Link UP
May 7 23:46:06.975452 systemd-networkd[1428]: cilium_net: Link UP
May 7 23:46:06.975455 systemd-networkd[1428]: cilium_net: Gained carrier
May 7 23:46:06.975576 systemd-networkd[1428]: cilium_host: Gained carrier
May 7 23:46:06.975696 systemd-networkd[1428]: cilium_host: Gained IPv6LL
May 7 23:46:07.163618 systemd-networkd[1428]: cilium_vxlan: Link UP
May 7 23:46:07.163627 systemd-networkd[1428]: cilium_vxlan: Gained carrier
May 7 23:46:07.478060 kernel: NET: Registered PF_ALG protocol family
May 7 23:46:07.848238 systemd-networkd[1428]: cilium_net: Gained IPv6LL
May 7 23:46:08.232188 systemd-networkd[1428]: cilium_vxlan: Gained IPv6LL
May 7 23:46:08.306622 systemd-networkd[1428]: lxc_health: Link UP
May 7 23:46:08.325686 systemd-networkd[1428]: lxc_health: Gained carrier
May 7 23:46:08.768309 kernel: eth0: renamed from tmp54232
May 7 23:46:08.772471 systemd-networkd[1428]: lxc719d2debb8d1: Link UP
May 7 23:46:08.772704 systemd-networkd[1428]: lxc719d2debb8d1: Gained carrier
May 7 23:46:08.810730 kernel: eth0: renamed from tmpf270e
May 7 23:46:08.814154 systemd-networkd[1428]: lxc4cf02c4a7188: Link UP
May 7 23:46:08.817866 systemd-networkd[1428]: lxc4cf02c4a7188: Gained carrier
May 7 23:46:09.960236 systemd-networkd[1428]: lxc_health: Gained IPv6LL
May 7 23:46:10.280215 systemd-networkd[1428]: lxc719d2debb8d1: Gained IPv6LL
May 7 23:46:10.281374 systemd-networkd[1428]: lxc4cf02c4a7188: Gained IPv6LL
May 7 23:46:12.205134 containerd[1741]: time="2025-05-07T23:46:12.204179329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 7 23:46:12.205134 containerd[1741]: time="2025-05-07T23:46:12.204254849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 7 23:46:12.205134 containerd[1741]: time="2025-05-07T23:46:12.204265009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:46:12.205134 containerd[1741]: time="2025-05-07T23:46:12.204355848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:46:12.234221 systemd[1]: Started cri-containerd-54232c10611591f8197fffc7aa4b8b58f4d260e2af4aad29a9a132942bb97281.scope - libcontainer container 54232c10611591f8197fffc7aa4b8b58f4d260e2af4aad29a9a132942bb97281.
May 7 23:46:12.269071 containerd[1741]: time="2025-05-07T23:46:12.268806647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9fs5z,Uid:b9ce4285-0a00-4e73-a446-a6614e489b86,Namespace:kube-system,Attempt:0,} returns sandbox id \"54232c10611591f8197fffc7aa4b8b58f4d260e2af4aad29a9a132942bb97281\""
May 7 23:46:12.275094 containerd[1741]: time="2025-05-07T23:46:12.274923043Z" level=info msg="CreateContainer within sandbox \"54232c10611591f8197fffc7aa4b8b58f4d260e2af4aad29a9a132942bb97281\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 7 23:46:12.325122 containerd[1741]: time="2025-05-07T23:46:12.324978451Z" level=info msg="CreateContainer within sandbox \"54232c10611591f8197fffc7aa4b8b58f4d260e2af4aad29a9a132942bb97281\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c4f5d48c140566e82d2894207cb06d88ce2a36c5b3d01848b57679b2c5fc744\""
May 7 23:46:12.325893 containerd[1741]: time="2025-05-07T23:46:12.325862210Z" level=info msg="StartContainer for \"9c4f5d48c140566e82d2894207cb06d88ce2a36c5b3d01848b57679b2c5fc744\""
May 7 23:46:12.362283 systemd[1]: Started cri-containerd-9c4f5d48c140566e82d2894207cb06d88ce2a36c5b3d01848b57679b2c5fc744.scope - libcontainer container 9c4f5d48c140566e82d2894207cb06d88ce2a36c5b3d01848b57679b2c5fc744.
May 7 23:46:12.385645 containerd[1741]: time="2025-05-07T23:46:12.385136292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 7 23:46:12.385645 containerd[1741]: time="2025-05-07T23:46:12.385273292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 7 23:46:12.387210 containerd[1741]: time="2025-05-07T23:46:12.386993411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:46:12.387210 containerd[1741]: time="2025-05-07T23:46:12.387111290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:46:12.408481 containerd[1741]: time="2025-05-07T23:46:12.408048517Z" level=info msg="StartContainer for \"9c4f5d48c140566e82d2894207cb06d88ce2a36c5b3d01848b57679b2c5fc744\" returns successfully"
May 7 23:46:12.408237 systemd[1]: Started cri-containerd-f270e6b96cadc764c19be97c3354ba396513c73235106d10596b17abda5c0200.scope - libcontainer container f270e6b96cadc764c19be97c3354ba396513c73235106d10596b17abda5c0200.
May 7 23:46:12.450564 containerd[1741]: time="2025-05-07T23:46:12.449182130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zczd7,Uid:76c57618-0b9b-46d5-a8fe-f32847fe0a0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f270e6b96cadc764c19be97c3354ba396513c73235106d10596b17abda5c0200\""
May 7 23:46:12.454610 containerd[1741]: time="2025-05-07T23:46:12.454559527Z" level=info msg="CreateContainer within sandbox \"f270e6b96cadc764c19be97c3354ba396513c73235106d10596b17abda5c0200\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 7 23:46:12.504182 containerd[1741]: time="2025-05-07T23:46:12.503276375Z" level=info msg="CreateContainer within sandbox \"f270e6b96cadc764c19be97c3354ba396513c73235106d10596b17abda5c0200\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b3841ec21e6a041ccd4ec294f2a4db46581568180edd473ad55d799e59459e7\""
May 7 23:46:12.504298 containerd[1741]: time="2025-05-07T23:46:12.504058895Z" level=info msg="StartContainer for \"0b3841ec21e6a041ccd4ec294f2a4db46581568180edd473ad55d799e59459e7\""
May 7 23:46:12.529229 systemd[1]: Started cri-containerd-0b3841ec21e6a041ccd4ec294f2a4db46581568180edd473ad55d799e59459e7.scope - libcontainer container 0b3841ec21e6a041ccd4ec294f2a4db46581568180edd473ad55d799e59459e7.
May 7 23:46:12.562875 containerd[1741]: time="2025-05-07T23:46:12.562827137Z" level=info msg="StartContainer for \"0b3841ec21e6a041ccd4ec294f2a4db46581568180edd473ad55d799e59459e7\" returns successfully"
May 7 23:46:12.656485 kubelet[3298]: I0507 23:46:12.656419 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9fs5z" podStartSLOduration=30.656403797 podStartE2EDuration="30.656403797s" podCreationTimestamp="2025-05-07 23:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:46:12.655560437 +0000 UTC m=+36.277994390" watchObservedRunningTime="2025-05-07 23:46:12.656403797 +0000 UTC m=+36.278837750"
May 7 23:46:12.656887 kubelet[3298]: I0507 23:46:12.656514 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-zczd7" podStartSLOduration=30.656509156 podStartE2EDuration="30.656509156s" podCreationTimestamp="2025-05-07 23:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:46:12.63550993 +0000 UTC m=+36.257943883" watchObservedRunningTime="2025-05-07 23:46:12.656509156 +0000 UTC m=+36.278943109"
May 7 23:47:41.508020 systemd[1]: Started sshd@7-10.200.20.32:22-10.200.16.10:42796.service - OpenSSH per-connection server daemon (10.200.16.10:42796).
May 7 23:47:41.931609 sshd[4678]: Accepted publickey for core from 10.200.16.10 port 42796 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:47:41.933328 sshd-session[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:41.938680 systemd-logind[1703]: New session 10 of user core.
May 7 23:47:41.942226 systemd[1]: Started session-10.scope - Session 10 of User core.
May 7 23:47:42.402410 sshd[4680]: Connection closed by 10.200.16.10 port 42796
May 7 23:47:42.403247 sshd-session[4678]: pam_unix(sshd:session): session closed for user core
May 7 23:47:42.405600 systemd[1]: sshd@7-10.200.20.32:22-10.200.16.10:42796.service: Deactivated successfully.
May 7 23:47:42.407411 systemd[1]: session-10.scope: Deactivated successfully.
May 7 23:47:42.408886 systemd-logind[1703]: Session 10 logged out. Waiting for processes to exit.
May 7 23:47:42.409829 systemd-logind[1703]: Removed session 10.
May 7 23:47:47.487349 systemd[1]: Started sshd@8-10.200.20.32:22-10.200.16.10:42812.service - OpenSSH per-connection server daemon (10.200.16.10:42812).
May 7 23:47:47.908211 sshd[4698]: Accepted publickey for core from 10.200.16.10 port 42812 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:47:47.909484 sshd-session[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:47.913987 systemd-logind[1703]: New session 11 of user core.
May 7 23:47:47.921199 systemd[1]: Started session-11.scope - Session 11 of User core.
May 7 23:47:48.294595 sshd[4700]: Connection closed by 10.200.16.10 port 42812
May 7 23:47:48.293695 sshd-session[4698]: pam_unix(sshd:session): session closed for user core
May 7 23:47:48.296749 systemd[1]: sshd@8-10.200.20.32:22-10.200.16.10:42812.service: Deactivated successfully.
May 7 23:47:48.298421 systemd[1]: session-11.scope: Deactivated successfully.
May 7 23:47:48.300514 systemd-logind[1703]: Session 11 logged out. Waiting for processes to exit.
May 7 23:47:48.301549 systemd-logind[1703]: Removed session 11.
May 7 23:47:53.371758 systemd[1]: Started sshd@9-10.200.20.32:22-10.200.16.10:47088.service - OpenSSH per-connection server daemon (10.200.16.10:47088).
May 7 23:47:53.797470 sshd[4713]: Accepted publickey for core from 10.200.16.10 port 47088 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:47:53.798779 sshd-session[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:53.804183 systemd-logind[1703]: New session 12 of user core.
May 7 23:47:53.814227 systemd[1]: Started session-12.scope - Session 12 of User core.
May 7 23:47:54.159439 sshd[4715]: Connection closed by 10.200.16.10 port 47088
May 7 23:47:54.160005 sshd-session[4713]: pam_unix(sshd:session): session closed for user core
May 7 23:47:54.163920 systemd[1]: sshd@9-10.200.20.32:22-10.200.16.10:47088.service: Deactivated successfully.
May 7 23:47:54.165950 systemd[1]: session-12.scope: Deactivated successfully.
May 7 23:47:54.167436 systemd-logind[1703]: Session 12 logged out. Waiting for processes to exit.
May 7 23:47:54.168352 systemd-logind[1703]: Removed session 12.
May 7 23:47:59.239766 systemd[1]: Started sshd@10-10.200.20.32:22-10.200.16.10:54166.service - OpenSSH per-connection server daemon (10.200.16.10:54166).
May 7 23:47:59.690523 sshd[4728]: Accepted publickey for core from 10.200.16.10 port 54166 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:47:59.692666 sshd-session[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:59.696608 systemd-logind[1703]: New session 13 of user core.
May 7 23:47:59.701162 systemd[1]: Started session-13.scope - Session 13 of User core.
May 7 23:48:00.069904 sshd[4730]: Connection closed by 10.200.16.10 port 54166
May 7 23:48:00.070503 sshd-session[4728]: pam_unix(sshd:session): session closed for user core
May 7 23:48:00.074222 systemd[1]: sshd@10-10.200.20.32:22-10.200.16.10:54166.service: Deactivated successfully.
May 7 23:48:00.076214 systemd[1]: session-13.scope: Deactivated successfully.
May 7 23:48:00.077226 systemd-logind[1703]: Session 13 logged out. Waiting for processes to exit.
May 7 23:48:00.078193 systemd-logind[1703]: Removed session 13.
May 7 23:48:05.158311 systemd[1]: Started sshd@11-10.200.20.32:22-10.200.16.10:54168.service - OpenSSH per-connection server daemon (10.200.16.10:54168).
May 7 23:48:05.609551 sshd[4742]: Accepted publickey for core from 10.200.16.10 port 54168 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:05.610797 sshd-session[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:05.614679 systemd-logind[1703]: New session 14 of user core.
May 7 23:48:05.619178 systemd[1]: Started session-14.scope - Session 14 of User core.
May 7 23:48:06.003572 sshd[4744]: Connection closed by 10.200.16.10 port 54168
May 7 23:48:06.004203 sshd-session[4742]: pam_unix(sshd:session): session closed for user core
May 7 23:48:06.007636 systemd[1]: sshd@11-10.200.20.32:22-10.200.16.10:54168.service: Deactivated successfully.
May 7 23:48:06.009633 systemd[1]: session-14.scope: Deactivated successfully.
May 7 23:48:06.010358 systemd-logind[1703]: Session 14 logged out. Waiting for processes to exit.
May 7 23:48:06.011709 systemd-logind[1703]: Removed session 14.
May 7 23:48:06.082944 systemd[1]: Started sshd@12-10.200.20.32:22-10.200.16.10:54180.service - OpenSSH per-connection server daemon (10.200.16.10:54180).
May 7 23:48:06.502832 sshd[4756]: Accepted publickey for core from 10.200.16.10 port 54180 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:06.504678 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:06.508843 systemd-logind[1703]: New session 15 of user core.
May 7 23:48:06.515214 systemd[1]: Started session-15.scope - Session 15 of User core.
May 7 23:48:06.973193 sshd[4758]: Connection closed by 10.200.16.10 port 54180
May 7 23:48:06.972493 sshd-session[4756]: pam_unix(sshd:session): session closed for user core
May 7 23:48:06.977191 systemd-logind[1703]: Session 15 logged out. Waiting for processes to exit.
May 7 23:48:06.977397 systemd[1]: sshd@12-10.200.20.32:22-10.200.16.10:54180.service: Deactivated successfully.
May 7 23:48:06.982973 systemd[1]: session-15.scope: Deactivated successfully.
May 7 23:48:06.984819 systemd-logind[1703]: Removed session 15.
May 7 23:48:07.052341 systemd[1]: Started sshd@13-10.200.20.32:22-10.200.16.10:54182.service - OpenSSH per-connection server daemon (10.200.16.10:54182).
May 7 23:48:07.476584 sshd[4768]: Accepted publickey for core from 10.200.16.10 port 54182 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:07.477865 sshd-session[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:07.482649 systemd-logind[1703]: New session 16 of user core.
May 7 23:48:07.489215 systemd[1]: Started session-16.scope - Session 16 of User core.
May 7 23:48:07.842324 sshd[4770]: Connection closed by 10.200.16.10 port 54182
May 7 23:48:07.842839 sshd-session[4768]: pam_unix(sshd:session): session closed for user core
May 7 23:48:07.846390 systemd[1]: sshd@13-10.200.20.32:22-10.200.16.10:54182.service: Deactivated successfully.
May 7 23:48:07.850684 systemd[1]: session-16.scope: Deactivated successfully.
May 7 23:48:07.851448 systemd-logind[1703]: Session 16 logged out. Waiting for processes to exit.
May 7 23:48:07.852622 systemd-logind[1703]: Removed session 16.
May 7 23:48:12.928292 systemd[1]: Started sshd@14-10.200.20.32:22-10.200.16.10:44808.service - OpenSSH per-connection server daemon (10.200.16.10:44808).
May 7 23:48:13.348955 sshd[4783]: Accepted publickey for core from 10.200.16.10 port 44808 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:13.350092 sshd-session[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:13.355817 systemd-logind[1703]: New session 17 of user core.
May 7 23:48:13.364248 systemd[1]: Started session-17.scope - Session 17 of User core.
May 7 23:48:13.714067 sshd[4787]: Connection closed by 10.200.16.10 port 44808
May 7 23:48:13.714618 sshd-session[4783]: pam_unix(sshd:session): session closed for user core
May 7 23:48:13.717948 systemd[1]: sshd@14-10.200.20.32:22-10.200.16.10:44808.service: Deactivated successfully.
May 7 23:48:13.719614 systemd[1]: session-17.scope: Deactivated successfully.
May 7 23:48:13.720449 systemd-logind[1703]: Session 17 logged out. Waiting for processes to exit.
May 7 23:48:13.721805 systemd-logind[1703]: Removed session 17.
May 7 23:48:18.798243 systemd[1]: Started sshd@15-10.200.20.32:22-10.200.16.10:44810.service - OpenSSH per-connection server daemon (10.200.16.10:44810).
May 7 23:48:19.257684 sshd[4799]: Accepted publickey for core from 10.200.16.10 port 44810 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:19.259071 sshd-session[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:19.264156 systemd-logind[1703]: New session 18 of user core.
May 7 23:48:19.268224 systemd[1]: Started session-18.scope - Session 18 of User core.
May 7 23:48:19.654072 sshd[4801]: Connection closed by 10.200.16.10 port 44810
May 7 23:48:19.654722 sshd-session[4799]: pam_unix(sshd:session): session closed for user core
May 7 23:48:19.658160 systemd[1]: sshd@15-10.200.20.32:22-10.200.16.10:44810.service: Deactivated successfully.
May 7 23:48:19.659887 systemd[1]: session-18.scope: Deactivated successfully.
May 7 23:48:19.660638 systemd-logind[1703]: Session 18 logged out. Waiting for processes to exit.
May 7 23:48:19.661794 systemd-logind[1703]: Removed session 18.
May 7 23:48:19.729757 systemd[1]: Started sshd@16-10.200.20.32:22-10.200.16.10:37098.service - OpenSSH per-connection server daemon (10.200.16.10:37098).
May 7 23:48:20.149283 sshd[4813]: Accepted publickey for core from 10.200.16.10 port 37098 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:20.150587 sshd-session[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:20.156131 systemd-logind[1703]: New session 19 of user core.
May 7 23:48:20.166210 systemd[1]: Started session-19.scope - Session 19 of User core.
May 7 23:48:20.585462 sshd[4815]: Connection closed by 10.200.16.10 port 37098
May 7 23:48:20.586190 sshd-session[4813]: pam_unix(sshd:session): session closed for user core
May 7 23:48:20.589742 systemd[1]: sshd@16-10.200.20.32:22-10.200.16.10:37098.service: Deactivated successfully.
May 7 23:48:20.592544 systemd[1]: session-19.scope: Deactivated successfully.
May 7 23:48:20.593444 systemd-logind[1703]: Session 19 logged out. Waiting for processes to exit.
May 7 23:48:20.594496 systemd-logind[1703]: Removed session 19.
May 7 23:48:20.669322 systemd[1]: Started sshd@17-10.200.20.32:22-10.200.16.10:37108.service - OpenSSH per-connection server daemon (10.200.16.10:37108).
May 7 23:48:21.091576 sshd[4825]: Accepted publickey for core from 10.200.16.10 port 37108 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:21.092912 sshd-session[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:21.097640 systemd-logind[1703]: New session 20 of user core.
May 7 23:48:21.102210 systemd[1]: Started session-20.scope - Session 20 of User core.
May 7 23:48:22.709389 sshd[4827]: Connection closed by 10.200.16.10 port 37108
May 7 23:48:22.709838 sshd-session[4825]: pam_unix(sshd:session): session closed for user core
May 7 23:48:22.713638 systemd[1]: sshd@17-10.200.20.32:22-10.200.16.10:37108.service: Deactivated successfully.
May 7 23:48:22.715802 systemd[1]: session-20.scope: Deactivated successfully.
May 7 23:48:22.717237 systemd-logind[1703]: Session 20 logged out. Waiting for processes to exit.
May 7 23:48:22.718307 systemd-logind[1703]: Removed session 20.
May 7 23:48:22.795298 systemd[1]: Started sshd@18-10.200.20.32:22-10.200.16.10:37110.service - OpenSSH per-connection server daemon (10.200.16.10:37110).
May 7 23:48:23.248905 sshd[4844]: Accepted publickey for core from 10.200.16.10 port 37110 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:23.250242 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:23.254863 systemd-logind[1703]: New session 21 of user core.
May 7 23:48:23.260204 systemd[1]: Started session-21.scope - Session 21 of User core.
May 7 23:48:23.758863 sshd[4846]: Connection closed by 10.200.16.10 port 37110
May 7 23:48:23.758253 sshd-session[4844]: pam_unix(sshd:session): session closed for user core
May 7 23:48:23.761976 systemd[1]: sshd@18-10.200.20.32:22-10.200.16.10:37110.service: Deactivated successfully.
May 7 23:48:23.764487 systemd[1]: session-21.scope: Deactivated successfully.
May 7 23:48:23.765419 systemd-logind[1703]: Session 21 logged out. Waiting for processes to exit.
May 7 23:48:23.766781 systemd-logind[1703]: Removed session 21.
May 7 23:48:23.850505 systemd[1]: Started sshd@19-10.200.20.32:22-10.200.16.10:37116.service - OpenSSH per-connection server daemon (10.200.16.10:37116).
May 7 23:48:24.303401 sshd[4856]: Accepted publickey for core from 10.200.16.10 port 37116 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:24.304745 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:24.310085 systemd-logind[1703]: New session 22 of user core.
May 7 23:48:24.318192 systemd[1]: Started session-22.scope - Session 22 of User core.
May 7 23:48:24.692486 sshd[4858]: Connection closed by 10.200.16.10 port 37116
May 7 23:48:24.693075 sshd-session[4856]: pam_unix(sshd:session): session closed for user core
May 7 23:48:24.696699 systemd[1]: sshd@19-10.200.20.32:22-10.200.16.10:37116.service: Deactivated successfully.
May 7 23:48:24.698375 systemd[1]: session-22.scope: Deactivated successfully.
May 7 23:48:24.699622 systemd-logind[1703]: Session 22 logged out. Waiting for processes to exit.
May 7 23:48:24.700862 systemd-logind[1703]: Removed session 22.
May 7 23:48:29.773120 systemd[1]: Started sshd@20-10.200.20.32:22-10.200.16.10:58992.service - OpenSSH per-connection server daemon (10.200.16.10:58992).
May 7 23:48:30.199641 sshd[4873]: Accepted publickey for core from 10.200.16.10 port 58992 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:30.201265 sshd-session[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:30.208106 systemd-logind[1703]: New session 23 of user core.
May 7 23:48:30.214477 systemd[1]: Started session-23.scope - Session 23 of User core.
May 7 23:48:30.583780 sshd[4875]: Connection closed by 10.200.16.10 port 58992
May 7 23:48:30.584799 sshd-session[4873]: pam_unix(sshd:session): session closed for user core
May 7 23:48:30.587893 systemd[1]: sshd@20-10.200.20.32:22-10.200.16.10:58992.service: Deactivated successfully.
May 7 23:48:30.589734 systemd[1]: session-23.scope: Deactivated successfully.
May 7 23:48:30.590458 systemd-logind[1703]: Session 23 logged out. Waiting for processes to exit.
May 7 23:48:30.591846 systemd-logind[1703]: Removed session 23.
May 7 23:48:35.664293 systemd[1]: Started sshd@21-10.200.20.32:22-10.200.16.10:59008.service - OpenSSH per-connection server daemon (10.200.16.10:59008).
May 7 23:48:36.081298 sshd[4886]: Accepted publickey for core from 10.200.16.10 port 59008 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:36.082613 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:36.086704 systemd-logind[1703]: New session 24 of user core.
May 7 23:48:36.095207 systemd[1]: Started session-24.scope - Session 24 of User core.
May 7 23:48:36.466090 sshd[4888]: Connection closed by 10.200.16.10 port 59008
May 7 23:48:36.466590 sshd-session[4886]: pam_unix(sshd:session): session closed for user core
May 7 23:48:36.469966 systemd[1]: sshd@21-10.200.20.32:22-10.200.16.10:59008.service: Deactivated successfully.
May 7 23:48:36.472201 systemd[1]: session-24.scope: Deactivated successfully.
May 7 23:48:36.473292 systemd-logind[1703]: Session 24 logged out. Waiting for processes to exit.
May 7 23:48:36.474242 systemd-logind[1703]: Removed session 24.
May 7 23:48:41.549314 systemd[1]: Started sshd@22-10.200.20.32:22-10.200.16.10:48124.service - OpenSSH per-connection server daemon (10.200.16.10:48124).
May 7 23:48:41.971202 sshd[4902]: Accepted publickey for core from 10.200.16.10 port 48124 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:41.972484 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:41.976806 systemd-logind[1703]: New session 25 of user core.
May 7 23:48:41.981230 systemd[1]: Started session-25.scope - Session 25 of User core.
May 7 23:48:42.336062 sshd[4904]: Connection closed by 10.200.16.10 port 48124
May 7 23:48:42.336611 sshd-session[4902]: pam_unix(sshd:session): session closed for user core
May 7 23:48:42.340328 systemd[1]: sshd@22-10.200.20.32:22-10.200.16.10:48124.service: Deactivated successfully.
May 7 23:48:42.341948 systemd[1]: session-25.scope: Deactivated successfully.
May 7 23:48:42.343595 systemd-logind[1703]: Session 25 logged out. Waiting for processes to exit.
May 7 23:48:42.344999 systemd-logind[1703]: Removed session 25.
May 7 23:48:42.421986 systemd[1]: Started sshd@23-10.200.20.32:22-10.200.16.10:48128.service - OpenSSH per-connection server daemon (10.200.16.10:48128).
May 7 23:48:42.877255 sshd[4915]: Accepted publickey for core from 10.200.16.10 port 48128 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:42.878528 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:42.882549 systemd-logind[1703]: New session 26 of user core.
May 7 23:48:42.890244 systemd[1]: Started session-26.scope - Session 26 of User core.
May 7 23:48:45.647345 containerd[1741]: time="2025-05-07T23:48:45.647128524Z" level=info msg="StopContainer for \"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7\" with timeout 30 (s)"
May 7 23:48:45.649600 containerd[1741]: time="2025-05-07T23:48:45.648749683Z" level=info msg="Stop container \"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7\" with signal terminated"
May 7 23:48:45.666705 systemd[1]: cri-containerd-a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7.scope: Deactivated successfully.
May 7 23:48:45.679819 containerd[1741]: time="2025-05-07T23:48:45.679744143Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 7 23:48:45.689142 containerd[1741]: time="2025-05-07T23:48:45.688501417Z" level=info msg="StopContainer for \"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9\" with timeout 2 (s)"
May 7 23:48:45.689650 containerd[1741]: time="2025-05-07T23:48:45.689615377Z" level=info msg="Stop container \"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9\" with signal terminated"
May 7 23:48:45.695495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7-rootfs.mount: Deactivated successfully.
May 7 23:48:45.702867 systemd-networkd[1428]: lxc_health: Link DOWN
May 7 23:48:45.702877 systemd-networkd[1428]: lxc_health: Lost carrier
May 7 23:48:45.721937 systemd[1]: cri-containerd-a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9.scope: Deactivated successfully.
May 7 23:48:45.722254 systemd[1]: cri-containerd-a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9.scope: Consumed 6.581s CPU time, 128M memory peak, 136K read from disk, 12.9M written to disk.
May 7 23:48:45.735067 containerd[1741]: time="2025-05-07T23:48:45.733844869Z" level=info msg="shim disconnected" id=a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7 namespace=k8s.io
May 7 23:48:45.735067 containerd[1741]: time="2025-05-07T23:48:45.733902229Z" level=warning msg="cleaning up after shim disconnected" id=a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7 namespace=k8s.io
May 7 23:48:45.735067 containerd[1741]: time="2025-05-07T23:48:45.733912269Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:48:45.746888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9-rootfs.mount: Deactivated successfully.
May 7 23:48:45.758544 containerd[1741]: time="2025-05-07T23:48:45.758427853Z" level=info msg="StopContainer for \"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7\" returns successfully"
May 7 23:48:45.759663 containerd[1741]: time="2025-05-07T23:48:45.759425772Z" level=info msg="StopPodSandbox for \"43ae35bebccc7ad0cc965de54365749d1c2d7cbce201880e89fcbc15c0877388\""
May 7 23:48:45.759663 containerd[1741]: time="2025-05-07T23:48:45.759466372Z" level=info msg="Container to stop \"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 7 23:48:45.761336 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43ae35bebccc7ad0cc965de54365749d1c2d7cbce201880e89fcbc15c0877388-shm.mount: Deactivated successfully.
May 7 23:48:45.769139 containerd[1741]: time="2025-05-07T23:48:45.769076526Z" level=info msg="shim disconnected" id=a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9 namespace=k8s.io
May 7 23:48:45.769462 containerd[1741]: time="2025-05-07T23:48:45.769306766Z" level=warning msg="cleaning up after shim disconnected" id=a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9 namespace=k8s.io
May 7 23:48:45.769462 containerd[1741]: time="2025-05-07T23:48:45.769321966Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:48:45.770021 systemd[1]: cri-containerd-43ae35bebccc7ad0cc965de54365749d1c2d7cbce201880e89fcbc15c0877388.scope: Deactivated successfully.
May 7 23:48:45.793808 containerd[1741]: time="2025-05-07T23:48:45.793456351Z" level=warning msg="cleanup warnings time=\"2025-05-07T23:48:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 7 23:48:45.802048 containerd[1741]: time="2025-05-07T23:48:45.801992105Z" level=info msg="StopContainer for \"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9\" returns successfully"
May 7 23:48:45.803504 containerd[1741]: time="2025-05-07T23:48:45.802814265Z" level=info msg="StopPodSandbox for \"d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3\""
May 7 23:48:45.803504 containerd[1741]: time="2025-05-07T23:48:45.802849105Z" level=info msg="Container to stop \"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 7 23:48:45.803504 containerd[1741]: time="2025-05-07T23:48:45.802874265Z" level=info msg="Container to stop \"ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 7 23:48:45.803504 containerd[1741]: time="2025-05-07T23:48:45.802932025Z" level=info msg="Container to stop \"ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 7 23:48:45.803504 containerd[1741]: time="2025-05-07T23:48:45.802944945Z" level=info msg="Container to stop \"e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 7 23:48:45.803504 containerd[1741]: time="2025-05-07T23:48:45.802952905Z" level=info msg="Container to stop \"ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 7 23:48:45.808582 systemd[1]: cri-containerd-d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3.scope: Deactivated successfully.
May 7 23:48:45.823199 containerd[1741]: time="2025-05-07T23:48:45.822722492Z" level=info msg="shim disconnected" id=43ae35bebccc7ad0cc965de54365749d1c2d7cbce201880e89fcbc15c0877388 namespace=k8s.io
May 7 23:48:45.823199 containerd[1741]: time="2025-05-07T23:48:45.822792492Z" level=warning msg="cleaning up after shim disconnected" id=43ae35bebccc7ad0cc965de54365749d1c2d7cbce201880e89fcbc15c0877388 namespace=k8s.io
May 7 23:48:45.823199 containerd[1741]: time="2025-05-07T23:48:45.822801852Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:48:45.840230 containerd[1741]: time="2025-05-07T23:48:45.839887841Z" level=info msg="shim disconnected" id=d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3 namespace=k8s.io
May 7 23:48:45.840230 containerd[1741]: time="2025-05-07T23:48:45.840227201Z" level=warning msg="cleaning up after shim disconnected" id=d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3 namespace=k8s.io
May 7 23:48:45.840230 containerd[1741]: time="2025-05-07T23:48:45.840239681Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:48:45.840565 containerd[1741]: time="2025-05-07T23:48:45.840363401Z" level=warning msg="cleanup warnings time=\"2025-05-07T23:48:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 7 23:48:45.843769 containerd[1741]: time="2025-05-07T23:48:45.843489919Z" level=info msg="TearDown network for sandbox \"43ae35bebccc7ad0cc965de54365749d1c2d7cbce201880e89fcbc15c0877388\" successfully"
May 7 23:48:45.843769 containerd[1741]: time="2025-05-07T23:48:45.843524999Z" level=info msg="StopPodSandbox for \"43ae35bebccc7ad0cc965de54365749d1c2d7cbce201880e89fcbc15c0877388\" returns successfully"
May 7 23:48:45.862885 containerd[1741]: time="2025-05-07T23:48:45.862794027Z" level=info msg="TearDown network for sandbox \"d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3\" successfully"
May 7 23:48:45.862885 containerd[1741]: time="2025-05-07T23:48:45.862829827Z" level=info msg="StopPodSandbox for \"d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3\" returns successfully"
May 7 23:48:45.892619 kubelet[3298]: I0507 23:48:45.892574 3298 scope.go:117] "RemoveContainer" containerID="a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7"
May 7 23:48:45.894417 containerd[1741]: time="2025-05-07T23:48:45.894103047Z" level=info msg="RemoveContainer for \"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7\""
May 7 23:48:45.909274 containerd[1741]: time="2025-05-07T23:48:45.908268238Z" level=info msg="RemoveContainer for \"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7\" returns successfully"
May 7 23:48:45.909274 containerd[1741]: time="2025-05-07T23:48:45.909084357Z" level=error msg="ContainerStatus for \"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7\": not found"
May 7 23:48:45.909411 kubelet[3298]: I0507 23:48:45.908636 3298 scope.go:117] "RemoveContainer" containerID="a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7"
May 7 23:48:45.909411 kubelet[3298]: E0507 23:48:45.909234 3298 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7\": not found" containerID="a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7"
May 7 23:48:45.909411 kubelet[3298]: I0507 23:48:45.909262 3298 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7"} err="failed to get container status \"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"a1c48b85dc79aabfefc98510999bada5196e0baaa34bf0b9faa8fc95f92751f7\": not found"
May 7 23:48:45.909411 kubelet[3298]: I0507 23:48:45.909339 3298 scope.go:117] "RemoveContainer" containerID="a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9"
May 7 23:48:45.910745 containerd[1741]: time="2025-05-07T23:48:45.910709716Z" level=info msg="RemoveContainer for \"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9\""
May 7 23:48:45.917916 containerd[1741]: time="2025-05-07T23:48:45.917870592Z" level=info msg="RemoveContainer for \"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9\" returns successfully"
May 7 23:48:45.918253 kubelet[3298]: I0507 23:48:45.918151 3298 scope.go:117] "RemoveContainer" containerID="ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738"
May 7 23:48:45.919586 containerd[1741]: time="2025-05-07T23:48:45.919549991Z" level=info msg="RemoveContainer for \"ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738\""
May 7 23:48:45.928581 containerd[1741]: time="2025-05-07T23:48:45.928540505Z" level=info msg="RemoveContainer for \"ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738\" returns successfully"
May 7 23:48:45.928801 kubelet[3298]: I0507 23:48:45.928774 3298 scope.go:117] "RemoveContainer" containerID="ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4"
May 7 23:48:45.930156 containerd[1741]: time="2025-05-07T23:48:45.929893144Z" level=info msg="RemoveContainer for \"ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4\""
May 7 23:48:45.939331 containerd[1741]: time="2025-05-07T23:48:45.939252418Z" level=info msg="RemoveContainer for \"ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4\" returns successfully"
May 7 23:48:45.939647 kubelet[3298]: I0507 23:48:45.939472 3298 scope.go:117] "RemoveContainer" containerID="e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e"
May 7 23:48:45.940702 containerd[1741]: time="2025-05-07T23:48:45.940672017Z" level=info msg="RemoveContainer for \"e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e\""
May 7 23:48:45.949294 containerd[1741]: time="2025-05-07T23:48:45.949252452Z" level=info msg="RemoveContainer for \"e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e\" returns successfully"
May 7 23:48:45.949603 kubelet[3298]: I0507 23:48:45.949572 3298 scope.go:117] "RemoveContainer" containerID="ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b"
May 7 23:48:45.951010 containerd[1741]: time="2025-05-07T23:48:45.950979651Z" level=info msg="RemoveContainer for \"ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b\""
May 7 23:48:45.962057 kubelet[3298]: I0507 23:48:45.960508 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd-cilium-config-path\") pod \"f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd\" (UID: \"f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd\") "
May 7 23:48:45.962057 kubelet[3298]: I0507 23:48:45.960547 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-cni-path\") pod \"1542033d-985a-404b-aab0-bbc36d1e1a2e\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") "
May 7 23:48:45.962057 kubelet[3298]: I0507 23:48:45.960566 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-xtables-lock\") pod \"1542033d-985a-404b-aab0-bbc36d1e1a2e\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") "
May 7 23:48:45.962057 kubelet[3298]: I0507 23:48:45.960584 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsq5w\" (UniqueName: \"kubernetes.io/projected/f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd-kube-api-access-lsq5w\") pod \"f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd\" (UID: \"f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd\") "
May 7 23:48:45.962057 kubelet[3298]: I0507 23:48:45.960600 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-lib-modules\") pod \"1542033d-985a-404b-aab0-bbc36d1e1a2e\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") "
May 7 23:48:45.962057 kubelet[3298]: I0507 23:48:45.960614 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-cilium-run\") pod \"1542033d-985a-404b-aab0-bbc36d1e1a2e\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") "
May 7 23:48:45.962282 containerd[1741]: time="2025-05-07T23:48:45.961461404Z" level=info msg="RemoveContainer for \"ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b\" returns successfully"
May 7 23:48:45.962314 kubelet[3298]: I0507 23:48:45.960628 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-host-proc-sys-net\") pod \"1542033d-985a-404b-aab0-bbc36d1e1a2e\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") "
May 7 23:48:45.962314 kubelet[3298]: I0507 23:48:45.960642 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-host-proc-sys-kernel\") pod \"1542033d-985a-404b-aab0-bbc36d1e1a2e\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") "
May 7 23:48:45.962314 kubelet[3298]: I0507 23:48:45.960666 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-hostproc\") pod \"1542033d-985a-404b-aab0-bbc36d1e1a2e\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") "
May 7 23:48:45.962314 kubelet[3298]: I0507 23:48:45.960681 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-etc-cni-netd\") pod \"1542033d-985a-404b-aab0-bbc36d1e1a2e\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") "
May 7 23:48:45.962314 kubelet[3298]: I0507 23:48:45.960750 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1542033d-985a-404b-aab0-bbc36d1e1a2e" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 7 23:48:45.962423 kubelet[3298]: I0507 23:48:45.961571 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1542033d-985a-404b-aab0-bbc36d1e1a2e" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 7 23:48:45.962423 kubelet[3298]: I0507 23:48:45.961605 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-cni-path" (OuterVolumeSpecName: "cni-path") pod "1542033d-985a-404b-aab0-bbc36d1e1a2e" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 7 23:48:45.962423 kubelet[3298]: I0507 23:48:45.961621 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1542033d-985a-404b-aab0-bbc36d1e1a2e" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 7 23:48:45.962840 kubelet[3298]: I0507 23:48:45.962711 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1542033d-985a-404b-aab0-bbc36d1e1a2e" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 7 23:48:45.962896 kubelet[3298]: I0507 23:48:45.962855 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1542033d-985a-404b-aab0-bbc36d1e1a2e" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 7 23:48:45.962896 kubelet[3298]: I0507 23:48:45.962874 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1542033d-985a-404b-aab0-bbc36d1e1a2e" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 7 23:48:45.963176 kubelet[3298]: I0507 23:48:45.963149 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-hostproc" (OuterVolumeSpecName: "hostproc") pod "1542033d-985a-404b-aab0-bbc36d1e1a2e" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 7 23:48:45.963438 kubelet[3298]: I0507 23:48:45.963417 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd" (UID: "f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 7 23:48:45.963922 kubelet[3298]: I0507 23:48:45.963901 3298 scope.go:117] "RemoveContainer" containerID="a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9"
May 7 23:48:45.964415 containerd[1741]: time="2025-05-07T23:48:45.964363802Z" level=error msg="ContainerStatus for \"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9\": not found"
May 7 23:48:45.964708 kubelet[3298]: E0507 23:48:45.964678 3298 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9\": not found" containerID="a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9"
May 7 23:48:45.964766 kubelet[3298]: I0507 23:48:45.964711 3298 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9"} err="failed to get container status \"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5683f80a38b2f787093414edb6ee5dc912262333596417053ad5f0e2ddfc6a9\": not found"
May 7 23:48:45.964766 kubelet[3298]: I0507 23:48:45.964731 3298 scope.go:117] "RemoveContainer" containerID="ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738"
May 7 23:48:45.965272 containerd[1741]: time="2025-05-07T23:48:45.965209842Z" level=error msg="ContainerStatus for \"ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738\": not found"
May 7 23:48:45.965743 kubelet[3298]: E0507 23:48:45.965608 3298 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738\": not found" containerID="ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738"
May 7 23:48:45.965743 kubelet[3298]: I0507 23:48:45.965636 3298 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738"} err="failed to get container status \"ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef52ea759dd0e6aa5d16ef1f1648ae36292abc338d3c70b7ba6fd06605057738\": not found"
May 7 23:48:45.965743 kubelet[3298]: I0507 23:48:45.965645 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd-kube-api-access-lsq5w" (OuterVolumeSpecName: "kube-api-access-lsq5w") pod "f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd" (UID: "f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd"). InnerVolumeSpecName "kube-api-access-lsq5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 7 23:48:45.965743 kubelet[3298]: I0507 23:48:45.965654 3298 scope.go:117] "RemoveContainer" containerID="ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4"
May 7 23:48:45.966772 containerd[1741]: time="2025-05-07T23:48:45.966614921Z" level=error msg="ContainerStatus for \"ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4\": not found"
May 7 23:48:45.967019 kubelet[3298]: E0507 23:48:45.966958 3298 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4\": not found" containerID="ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4"
May 7 23:48:45.967019 kubelet[3298]: I0507 23:48:45.966982 3298 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4"} err="failed to get container status \"ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca521b046b8a79235ab4916ccfcde7567d26e5bdb20ff719b3bf1cf1f775d0d4\": not found"
May 7 23:48:45.967019 kubelet[3298]: I0507 23:48:45.966998 3298 scope.go:117] "RemoveContainer" containerID="e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e"
May 7 23:48:45.967580 containerd[1741]: time="2025-05-07T23:48:45.967489000Z" level=error msg="ContainerStatus for \"e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e\": not found"
May 7
23:48:45.967807 kubelet[3298]: E0507 23:48:45.967703 3298 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e\": not found" containerID="e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e" May 7 23:48:45.967807 kubelet[3298]: I0507 23:48:45.967734 3298 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e"} err="failed to get container status \"e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7a27f8b1c9299fcceac8be44525506fe4b7f38d76030f0d7d1dbf48ecdf252e\": not found" May 7 23:48:45.967807 kubelet[3298]: I0507 23:48:45.967750 3298 scope.go:117] "RemoveContainer" containerID="ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b" May 7 23:48:45.968228 containerd[1741]: time="2025-05-07T23:48:45.967999200Z" level=error msg="ContainerStatus for \"ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b\": not found" May 7 23:48:45.968409 kubelet[3298]: E0507 23:48:45.968198 3298 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b\": not found" containerID="ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b" May 7 23:48:45.968409 kubelet[3298]: I0507 23:48:45.968343 3298 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b"} err="failed 
to get container status \"ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff55964e87e363b828ea77cffc06554e78831ecefed5f58406f391c001363b3b\": not found" May 7 23:48:46.061460 kubelet[3298]: I0507 23:48:46.060900 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqpxj\" (UniqueName: \"kubernetes.io/projected/1542033d-985a-404b-aab0-bbc36d1e1a2e-kube-api-access-lqpxj\") pod \"1542033d-985a-404b-aab0-bbc36d1e1a2e\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " May 7 23:48:46.061460 kubelet[3298]: I0507 23:48:46.060942 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1542033d-985a-404b-aab0-bbc36d1e1a2e-clustermesh-secrets\") pod \"1542033d-985a-404b-aab0-bbc36d1e1a2e\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " May 7 23:48:46.061460 kubelet[3298]: I0507 23:48:46.060962 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1542033d-985a-404b-aab0-bbc36d1e1a2e-cilium-config-path\") pod \"1542033d-985a-404b-aab0-bbc36d1e1a2e\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " May 7 23:48:46.061460 kubelet[3298]: I0507 23:48:46.060981 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-cilium-cgroup\") pod \"1542033d-985a-404b-aab0-bbc36d1e1a2e\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " May 7 23:48:46.061460 kubelet[3298]: I0507 23:48:46.061016 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1542033d-985a-404b-aab0-bbc36d1e1a2e-hubble-tls\") pod \"1542033d-985a-404b-aab0-bbc36d1e1a2e\" (UID: 
\"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " May 7 23:48:46.061460 kubelet[3298]: I0507 23:48:46.061057 3298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-bpf-maps\") pod \"1542033d-985a-404b-aab0-bbc36d1e1a2e\" (UID: \"1542033d-985a-404b-aab0-bbc36d1e1a2e\") " May 7 23:48:46.061695 kubelet[3298]: I0507 23:48:46.061092 3298 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-cilium-run\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.061695 kubelet[3298]: I0507 23:48:46.061103 3298 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-lib-modules\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.061695 kubelet[3298]: I0507 23:48:46.061111 3298 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-host-proc-sys-kernel\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.061695 kubelet[3298]: I0507 23:48:46.061121 3298 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-hostproc\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.061695 kubelet[3298]: I0507 23:48:46.061129 3298 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-host-proc-sys-net\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.061695 kubelet[3298]: I0507 23:48:46.061138 3298 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-etc-cni-netd\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.061695 kubelet[3298]: I0507 23:48:46.061149 3298 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-cni-path\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.061695 kubelet[3298]: I0507 23:48:46.061176 3298 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd-cilium-config-path\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.061851 kubelet[3298]: I0507 23:48:46.061187 3298 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-xtables-lock\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.061851 kubelet[3298]: I0507 23:48:46.061196 3298 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lsq5w\" (UniqueName: \"kubernetes.io/projected/f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd-kube-api-access-lsq5w\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.061851 kubelet[3298]: I0507 23:48:46.061222 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1542033d-985a-404b-aab0-bbc36d1e1a2e" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:48:46.062082 kubelet[3298]: I0507 23:48:46.062059 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1542033d-985a-404b-aab0-bbc36d1e1a2e" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:48:46.064290 kubelet[3298]: I0507 23:48:46.064249 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1542033d-985a-404b-aab0-bbc36d1e1a2e-kube-api-access-lqpxj" (OuterVolumeSpecName: "kube-api-access-lqpxj") pod "1542033d-985a-404b-aab0-bbc36d1e1a2e" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e"). InnerVolumeSpecName "kube-api-access-lqpxj". PluginName "kubernetes.io/projected", VolumeGidValue "" May 7 23:48:46.064500 kubelet[3298]: I0507 23:48:46.064329 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1542033d-985a-404b-aab0-bbc36d1e1a2e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1542033d-985a-404b-aab0-bbc36d1e1a2e" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 7 23:48:46.065683 kubelet[3298]: I0507 23:48:46.065616 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1542033d-985a-404b-aab0-bbc36d1e1a2e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1542033d-985a-404b-aab0-bbc36d1e1a2e" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 7 23:48:46.066389 kubelet[3298]: I0507 23:48:46.066347 3298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1542033d-985a-404b-aab0-bbc36d1e1a2e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1542033d-985a-404b-aab0-bbc36d1e1a2e" (UID: "1542033d-985a-404b-aab0-bbc36d1e1a2e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 7 23:48:46.161745 kubelet[3298]: I0507 23:48:46.161588 3298 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-bpf-maps\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.161745 kubelet[3298]: I0507 23:48:46.161618 3298 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lqpxj\" (UniqueName: \"kubernetes.io/projected/1542033d-985a-404b-aab0-bbc36d1e1a2e-kube-api-access-lqpxj\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.161745 kubelet[3298]: I0507 23:48:46.161628 3298 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1542033d-985a-404b-aab0-bbc36d1e1a2e-clustermesh-secrets\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.161745 kubelet[3298]: I0507 23:48:46.161639 3298 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1542033d-985a-404b-aab0-bbc36d1e1a2e-cilium-config-path\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.161745 kubelet[3298]: I0507 23:48:46.161647 3298 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1542033d-985a-404b-aab0-bbc36d1e1a2e-cilium-cgroup\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.161745 kubelet[3298]: I0507 23:48:46.161656 3298 
reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1542033d-985a-404b-aab0-bbc36d1e1a2e-hubble-tls\") on node \"ci-4230.1.1-n-afbb805c8a\" DevicePath \"\"" May 7 23:48:46.196551 systemd[1]: Removed slice kubepods-besteffort-podf7eb64cc_8db6_43f5_ab4d_6b5088ab18bd.slice - libcontainer container kubepods-besteffort-podf7eb64cc_8db6_43f5_ab4d_6b5088ab18bd.slice. May 7 23:48:46.203752 systemd[1]: Removed slice kubepods-burstable-pod1542033d_985a_404b_aab0_bbc36d1e1a2e.slice - libcontainer container kubepods-burstable-pod1542033d_985a_404b_aab0_bbc36d1e1a2e.slice. May 7 23:48:46.204063 systemd[1]: kubepods-burstable-pod1542033d_985a_404b_aab0_bbc36d1e1a2e.slice: Consumed 6.649s CPU time, 128.4M memory peak, 136K read from disk, 12.9M written to disk. May 7 23:48:46.512433 kubelet[3298]: I0507 23:48:46.512327 3298 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1542033d-985a-404b-aab0-bbc36d1e1a2e" path="/var/lib/kubelet/pods/1542033d-985a-404b-aab0-bbc36d1e1a2e/volumes" May 7 23:48:46.512984 kubelet[3298]: I0507 23:48:46.512880 3298 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd" path="/var/lib/kubelet/pods/f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd/volumes" May 7 23:48:46.598931 kubelet[3298]: E0507 23:48:46.598817 3298 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 7 23:48:46.656067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3-rootfs.mount: Deactivated successfully. May 7 23:48:46.656163 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d853f1bf52972050b1628ded508da81b645e927d03cd5d6caa5c44d6368112f3-shm.mount: Deactivated successfully. 
May 7 23:48:46.656223 systemd[1]: var-lib-kubelet-pods-1542033d\x2d985a\x2d404b\x2daab0\x2dbbc36d1e1a2e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlqpxj.mount: Deactivated successfully. May 7 23:48:46.656281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43ae35bebccc7ad0cc965de54365749d1c2d7cbce201880e89fcbc15c0877388-rootfs.mount: Deactivated successfully. May 7 23:48:46.656326 systemd[1]: var-lib-kubelet-pods-f7eb64cc\x2d8db6\x2d43f5\x2dab4d\x2d6b5088ab18bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlsq5w.mount: Deactivated successfully. May 7 23:48:46.656373 systemd[1]: var-lib-kubelet-pods-1542033d\x2d985a\x2d404b\x2daab0\x2dbbc36d1e1a2e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 7 23:48:46.656421 systemd[1]: var-lib-kubelet-pods-1542033d\x2d985a\x2d404b\x2daab0\x2dbbc36d1e1a2e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 7 23:48:47.668658 sshd[4917]: Connection closed by 10.200.16.10 port 48128 May 7 23:48:47.669297 sshd-session[4915]: pam_unix(sshd:session): session closed for user core May 7 23:48:47.673526 systemd[1]: sshd@23-10.200.20.32:22-10.200.16.10:48128.service: Deactivated successfully. May 7 23:48:47.675334 systemd[1]: session-26.scope: Deactivated successfully. May 7 23:48:47.676193 systemd[1]: session-26.scope: Consumed 1.882s CPU time, 25.7M memory peak. May 7 23:48:47.677079 systemd-logind[1703]: Session 26 logged out. Waiting for processes to exit. May 7 23:48:47.678017 systemd-logind[1703]: Removed session 26. May 7 23:48:47.756184 systemd[1]: Started sshd@24-10.200.20.32:22-10.200.16.10:48132.service - OpenSSH per-connection server daemon (10.200.16.10:48132). 
May 7 23:48:48.215519 sshd[5079]: Accepted publickey for core from 10.200.16.10 port 48132 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y May 7 23:48:48.216795 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:48:48.220878 systemd-logind[1703]: New session 27 of user core. May 7 23:48:48.229212 systemd[1]: Started session-27.scope - Session 27 of User core. May 7 23:48:49.393470 kubelet[3298]: E0507 23:48:49.393403 3298 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1542033d-985a-404b-aab0-bbc36d1e1a2e" containerName="mount-cgroup" May 7 23:48:49.393470 kubelet[3298]: E0507 23:48:49.393444 3298 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1542033d-985a-404b-aab0-bbc36d1e1a2e" containerName="mount-bpf-fs" May 7 23:48:49.393470 kubelet[3298]: E0507 23:48:49.393451 3298 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1542033d-985a-404b-aab0-bbc36d1e1a2e" containerName="clean-cilium-state" May 7 23:48:49.393470 kubelet[3298]: E0507 23:48:49.393457 3298 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1542033d-985a-404b-aab0-bbc36d1e1a2e" containerName="cilium-agent" May 7 23:48:49.393470 kubelet[3298]: E0507 23:48:49.393463 3298 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd" containerName="cilium-operator" May 7 23:48:49.393470 kubelet[3298]: E0507 23:48:49.393469 3298 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1542033d-985a-404b-aab0-bbc36d1e1a2e" containerName="apply-sysctl-overwrites" May 7 23:48:49.393944 kubelet[3298]: I0507 23:48:49.393493 3298 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7eb64cc-8db6-43f5-ab4d-6b5088ab18bd" containerName="cilium-operator" May 7 23:48:49.393944 kubelet[3298]: I0507 23:48:49.393500 3298 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1542033d-985a-404b-aab0-bbc36d1e1a2e" containerName="cilium-agent" May 7 23:48:49.403394 systemd[1]: Created slice kubepods-burstable-podfdf34396_0d6e_4ea3_b568_24da37512825.slice - libcontainer container kubepods-burstable-podfdf34396_0d6e_4ea3_b568_24da37512825.slice. May 7 23:48:49.454574 sshd[5081]: Connection closed by 10.200.16.10 port 48132 May 7 23:48:49.456349 sshd-session[5079]: pam_unix(sshd:session): session closed for user core May 7 23:48:49.461715 systemd[1]: sshd@24-10.200.20.32:22-10.200.16.10:48132.service: Deactivated successfully. May 7 23:48:49.463630 systemd[1]: session-27.scope: Deactivated successfully. May 7 23:48:49.464401 systemd-logind[1703]: Session 27 logged out. Waiting for processes to exit. May 7 23:48:49.465367 systemd-logind[1703]: Removed session 27. May 7 23:48:49.479236 kubelet[3298]: I0507 23:48:49.478856 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdf34396-0d6e-4ea3-b568-24da37512825-cilium-cgroup\") pod \"cilium-9bszp\" (UID: \"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.479236 kubelet[3298]: I0507 23:48:49.478894 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdf34396-0d6e-4ea3-b568-24da37512825-host-proc-sys-kernel\") pod \"cilium-9bszp\" (UID: \"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.479236 kubelet[3298]: I0507 23:48:49.478915 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6jlz\" (UniqueName: \"kubernetes.io/projected/fdf34396-0d6e-4ea3-b568-24da37512825-kube-api-access-m6jlz\") pod \"cilium-9bszp\" (UID: \"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.479236 kubelet[3298]: I0507 
23:48:49.478944 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdf34396-0d6e-4ea3-b568-24da37512825-cilium-run\") pod \"cilium-9bszp\" (UID: \"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.479236 kubelet[3298]: I0507 23:48:49.478961 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdf34396-0d6e-4ea3-b568-24da37512825-cni-path\") pod \"cilium-9bszp\" (UID: \"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.479236 kubelet[3298]: I0507 23:48:49.478976 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdf34396-0d6e-4ea3-b568-24da37512825-host-proc-sys-net\") pod \"cilium-9bszp\" (UID: \"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.479520 kubelet[3298]: I0507 23:48:49.478993 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdf34396-0d6e-4ea3-b568-24da37512825-lib-modules\") pod \"cilium-9bszp\" (UID: \"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.479520 kubelet[3298]: I0507 23:48:49.479010 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdf34396-0d6e-4ea3-b568-24da37512825-xtables-lock\") pod \"cilium-9bszp\" (UID: \"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.479520 kubelet[3298]: I0507 23:48:49.479035 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/fdf34396-0d6e-4ea3-b568-24da37512825-cilium-ipsec-secrets\") pod \"cilium-9bszp\" (UID: \"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.479520 kubelet[3298]: I0507 23:48:49.479054 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdf34396-0d6e-4ea3-b568-24da37512825-hubble-tls\") pod \"cilium-9bszp\" (UID: \"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.479520 kubelet[3298]: I0507 23:48:49.479069 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdf34396-0d6e-4ea3-b568-24da37512825-cilium-config-path\") pod \"cilium-9bszp\" (UID: \"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.479520 kubelet[3298]: I0507 23:48:49.479085 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdf34396-0d6e-4ea3-b568-24da37512825-hostproc\") pod \"cilium-9bszp\" (UID: \"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.479698 kubelet[3298]: I0507 23:48:49.479101 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdf34396-0d6e-4ea3-b568-24da37512825-etc-cni-netd\") pod \"cilium-9bszp\" (UID: \"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.479698 kubelet[3298]: I0507 23:48:49.479115 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdf34396-0d6e-4ea3-b568-24da37512825-clustermesh-secrets\") pod \"cilium-9bszp\" (UID: 
\"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.479698 kubelet[3298]: I0507 23:48:49.479131 3298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdf34396-0d6e-4ea3-b568-24da37512825-bpf-maps\") pod \"cilium-9bszp\" (UID: \"fdf34396-0d6e-4ea3-b568-24da37512825\") " pod="kube-system/cilium-9bszp" May 7 23:48:49.531351 systemd[1]: Started sshd@25-10.200.20.32:22-10.200.16.10:47764.service - OpenSSH per-connection server daemon (10.200.16.10:47764). May 7 23:48:49.710045 containerd[1741]: time="2025-05-07T23:48:49.709914016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9bszp,Uid:fdf34396-0d6e-4ea3-b568-24da37512825,Namespace:kube-system,Attempt:0,}" May 7 23:48:49.751336 containerd[1741]: time="2025-05-07T23:48:49.750906187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:48:49.751522 containerd[1741]: time="2025-05-07T23:48:49.751357266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:48:49.751522 containerd[1741]: time="2025-05-07T23:48:49.751375306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:48:49.751668 containerd[1741]: time="2025-05-07T23:48:49.751542866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:48:49.767180 systemd[1]: Started cri-containerd-52aa5473138b7d5789c89ec1dad5ddb4f6b6d9a34ab797356bfb6dbc69672006.scope - libcontainer container 52aa5473138b7d5789c89ec1dad5ddb4f6b6d9a34ab797356bfb6dbc69672006. 
May 7 23:48:49.788314 containerd[1741]: time="2025-05-07T23:48:49.788126640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9bszp,Uid:fdf34396-0d6e-4ea3-b568-24da37512825,Namespace:kube-system,Attempt:0,} returns sandbox id \"52aa5473138b7d5789c89ec1dad5ddb4f6b6d9a34ab797356bfb6dbc69672006\"" May 7 23:48:49.791747 containerd[1741]: time="2025-05-07T23:48:49.791705958Z" level=info msg="CreateContainer within sandbox \"52aa5473138b7d5789c89ec1dad5ddb4f6b6d9a34ab797356bfb6dbc69672006\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 7 23:48:49.833957 containerd[1741]: time="2025-05-07T23:48:49.833901608Z" level=info msg="CreateContainer within sandbox \"52aa5473138b7d5789c89ec1dad5ddb4f6b6d9a34ab797356bfb6dbc69672006\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f3e6ae70fe718dcbbe28e60d47840ea6edd6ffb0e18a73d770ac44ef3ab8563f\"" May 7 23:48:49.835198 containerd[1741]: time="2025-05-07T23:48:49.835105847Z" level=info msg="StartContainer for \"f3e6ae70fe718dcbbe28e60d47840ea6edd6ffb0e18a73d770ac44ef3ab8563f\"" May 7 23:48:49.857188 systemd[1]: Started cri-containerd-f3e6ae70fe718dcbbe28e60d47840ea6edd6ffb0e18a73d770ac44ef3ab8563f.scope - libcontainer container f3e6ae70fe718dcbbe28e60d47840ea6edd6ffb0e18a73d770ac44ef3ab8563f. May 7 23:48:49.885171 containerd[1741]: time="2025-05-07T23:48:49.885126092Z" level=info msg="StartContainer for \"f3e6ae70fe718dcbbe28e60d47840ea6edd6ffb0e18a73d770ac44ef3ab8563f\" returns successfully" May 7 23:48:49.890052 systemd[1]: cri-containerd-f3e6ae70fe718dcbbe28e60d47840ea6edd6ffb0e18a73d770ac44ef3ab8563f.scope: Deactivated successfully. 
May 7 23:48:49.950062 sshd[5091]: Accepted publickey for core from 10.200.16.10 port 47764 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:49.952344 containerd[1741]: time="2025-05-07T23:48:49.951783845Z" level=info msg="shim disconnected" id=f3e6ae70fe718dcbbe28e60d47840ea6edd6ffb0e18a73d770ac44ef3ab8563f namespace=k8s.io
May 7 23:48:49.952344 containerd[1741]: time="2025-05-07T23:48:49.951850845Z" level=warning msg="cleaning up after shim disconnected" id=f3e6ae70fe718dcbbe28e60d47840ea6edd6ffb0e18a73d770ac44ef3ab8563f namespace=k8s.io
May 7 23:48:49.952344 containerd[1741]: time="2025-05-07T23:48:49.951860045Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:48:49.952207 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:49.959247 systemd-logind[1703]: New session 28 of user core.
May 7 23:48:49.963204 systemd[1]: Started session-28.scope - Session 28 of User core.
May 7 23:48:49.979671 kubelet[3298]: I0507 23:48:49.979618 3298 setters.go:600] "Node became not ready" node="ci-4230.1.1-n-afbb805c8a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-07T23:48:49Z","lastTransitionTime":"2025-05-07T23:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 7 23:48:50.248042 sshd[5198]: Connection closed by 10.200.16.10 port 47764
May 7 23:48:50.248697 sshd-session[5091]: pam_unix(sshd:session): session closed for user core
May 7 23:48:50.252279 systemd[1]: sshd@25-10.200.20.32:22-10.200.16.10:47764.service: Deactivated successfully.
May 7 23:48:50.254568 systemd[1]: session-28.scope: Deactivated successfully.
May 7 23:48:50.255570 systemd-logind[1703]: Session 28 logged out. Waiting for processes to exit.
May 7 23:48:50.256780 systemd-logind[1703]: Removed session 28.
May 7 23:48:50.329284 systemd[1]: Started sshd@26-10.200.20.32:22-10.200.16.10:47780.service - OpenSSH per-connection server daemon (10.200.16.10:47780).
May 7 23:48:50.747509 sshd[5206]: Accepted publickey for core from 10.200.16.10 port 47780 ssh2: RSA SHA256:Xf+vpbdJh3/esr9OIwFY2Rj6mXkq4UjyjZIQRU1uG/Y
May 7 23:48:50.748762 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:48:50.753305 systemd-logind[1703]: New session 29 of user core.
May 7 23:48:50.765288 systemd[1]: Started session-29.scope - Session 29 of User core.
May 7 23:48:50.918460 containerd[1741]: time="2025-05-07T23:48:50.917482242Z" level=info msg="CreateContainer within sandbox \"52aa5473138b7d5789c89ec1dad5ddb4f6b6d9a34ab797356bfb6dbc69672006\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 7 23:48:50.957806 containerd[1741]: time="2025-05-07T23:48:50.957758693Z" level=info msg="CreateContainer within sandbox \"52aa5473138b7d5789c89ec1dad5ddb4f6b6d9a34ab797356bfb6dbc69672006\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d149934612e8fabe858bb23a9545c9b7e5b5e58aed5eeebbcd0a9f203405342b\""
May 7 23:48:50.958706 containerd[1741]: time="2025-05-07T23:48:50.958230453Z" level=info msg="StartContainer for \"d149934612e8fabe858bb23a9545c9b7e5b5e58aed5eeebbcd0a9f203405342b\""
May 7 23:48:50.986207 systemd[1]: Started cri-containerd-d149934612e8fabe858bb23a9545c9b7e5b5e58aed5eeebbcd0a9f203405342b.scope - libcontainer container d149934612e8fabe858bb23a9545c9b7e5b5e58aed5eeebbcd0a9f203405342b.
May 7 23:48:51.019483 containerd[1741]: time="2025-05-07T23:48:51.019360290Z" level=info msg="StartContainer for \"d149934612e8fabe858bb23a9545c9b7e5b5e58aed5eeebbcd0a9f203405342b\" returns successfully"
May 7 23:48:51.025594 systemd[1]: cri-containerd-d149934612e8fabe858bb23a9545c9b7e5b5e58aed5eeebbcd0a9f203405342b.scope: Deactivated successfully.
May 7 23:48:51.071233 containerd[1741]: time="2025-05-07T23:48:51.071164613Z" level=info msg="shim disconnected" id=d149934612e8fabe858bb23a9545c9b7e5b5e58aed5eeebbcd0a9f203405342b namespace=k8s.io
May 7 23:48:51.071233 containerd[1741]: time="2025-05-07T23:48:51.071221253Z" level=warning msg="cleaning up after shim disconnected" id=d149934612e8fabe858bb23a9545c9b7e5b5e58aed5eeebbcd0a9f203405342b namespace=k8s.io
May 7 23:48:51.071233 containerd[1741]: time="2025-05-07T23:48:51.071230533Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:48:51.586755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d149934612e8fabe858bb23a9545c9b7e5b5e58aed5eeebbcd0a9f203405342b-rootfs.mount: Deactivated successfully.
May 7 23:48:51.599761 kubelet[3298]: E0507 23:48:51.599667 3298 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 7 23:48:51.920580 containerd[1741]: time="2025-05-07T23:48:51.920488293Z" level=info msg="CreateContainer within sandbox \"52aa5473138b7d5789c89ec1dad5ddb4f6b6d9a34ab797356bfb6dbc69672006\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 7 23:48:51.973853 containerd[1741]: time="2025-05-07T23:48:51.973613015Z" level=info msg="CreateContainer within sandbox \"52aa5473138b7d5789c89ec1dad5ddb4f6b6d9a34ab797356bfb6dbc69672006\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8dea7e5e85082ac2acb47f44326d5a590d1a586f463863ed82928bd6a01a2e52\""
May 7 23:48:51.974224 containerd[1741]: time="2025-05-07T23:48:51.974193615Z" level=info msg="StartContainer for \"8dea7e5e85082ac2acb47f44326d5a590d1a586f463863ed82928bd6a01a2e52\""
May 7 23:48:52.021231 systemd[1]: Started cri-containerd-8dea7e5e85082ac2acb47f44326d5a590d1a586f463863ed82928bd6a01a2e52.scope - libcontainer container 8dea7e5e85082ac2acb47f44326d5a590d1a586f463863ed82928bd6a01a2e52.
May 7 23:48:52.052572 systemd[1]: cri-containerd-8dea7e5e85082ac2acb47f44326d5a590d1a586f463863ed82928bd6a01a2e52.scope: Deactivated successfully.
May 7 23:48:52.056135 containerd[1741]: time="2025-05-07T23:48:52.056012677Z" level=info msg="StartContainer for \"8dea7e5e85082ac2acb47f44326d5a590d1a586f463863ed82928bd6a01a2e52\" returns successfully"
May 7 23:48:52.086836 containerd[1741]: time="2025-05-07T23:48:52.086627895Z" level=info msg="shim disconnected" id=8dea7e5e85082ac2acb47f44326d5a590d1a586f463863ed82928bd6a01a2e52 namespace=k8s.io
May 7 23:48:52.086836 containerd[1741]: time="2025-05-07T23:48:52.086774775Z" level=warning msg="cleaning up after shim disconnected" id=8dea7e5e85082ac2acb47f44326d5a590d1a586f463863ed82928bd6a01a2e52 namespace=k8s.io
May 7 23:48:52.086836 containerd[1741]: time="2025-05-07T23:48:52.086784055Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:48:52.586716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dea7e5e85082ac2acb47f44326d5a590d1a586f463863ed82928bd6a01a2e52-rootfs.mount: Deactivated successfully.
May 7 23:48:52.925828 containerd[1741]: time="2025-05-07T23:48:52.925788302Z" level=info msg="CreateContainer within sandbox \"52aa5473138b7d5789c89ec1dad5ddb4f6b6d9a34ab797356bfb6dbc69672006\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 7 23:48:52.967890 containerd[1741]: time="2025-05-07T23:48:52.967835512Z" level=info msg="CreateContainer within sandbox \"52aa5473138b7d5789c89ec1dad5ddb4f6b6d9a34ab797356bfb6dbc69672006\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9841f9ab4e94700c5d5ec7caf50e283e41b3f4fb64dba9e04f48af2a8f9bce9a\""
May 7 23:48:52.968788 containerd[1741]: time="2025-05-07T23:48:52.968698271Z" level=info msg="StartContainer for \"9841f9ab4e94700c5d5ec7caf50e283e41b3f4fb64dba9e04f48af2a8f9bce9a\""
May 7 23:48:52.999231 systemd[1]: Started cri-containerd-9841f9ab4e94700c5d5ec7caf50e283e41b3f4fb64dba9e04f48af2a8f9bce9a.scope - libcontainer container 9841f9ab4e94700c5d5ec7caf50e283e41b3f4fb64dba9e04f48af2a8f9bce9a.
May 7 23:48:53.020786 systemd[1]: cri-containerd-9841f9ab4e94700c5d5ec7caf50e283e41b3f4fb64dba9e04f48af2a8f9bce9a.scope: Deactivated successfully.
May 7 23:48:53.027881 containerd[1741]: time="2025-05-07T23:48:53.027795990Z" level=info msg="StartContainer for \"9841f9ab4e94700c5d5ec7caf50e283e41b3f4fb64dba9e04f48af2a8f9bce9a\" returns successfully"
May 7 23:48:53.060175 containerd[1741]: time="2025-05-07T23:48:53.060083207Z" level=info msg="shim disconnected" id=9841f9ab4e94700c5d5ec7caf50e283e41b3f4fb64dba9e04f48af2a8f9bce9a namespace=k8s.io
May 7 23:48:53.060560 containerd[1741]: time="2025-05-07T23:48:53.060415327Z" level=warning msg="cleaning up after shim disconnected" id=9841f9ab4e94700c5d5ec7caf50e283e41b3f4fb64dba9e04f48af2a8f9bce9a namespace=k8s.io
May 7 23:48:53.060560 containerd[1741]: time="2025-05-07T23:48:53.060432647Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:48:53.586885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9841f9ab4e94700c5d5ec7caf50e283e41b3f4fb64dba9e04f48af2a8f9bce9a-rootfs.mount: Deactivated successfully.
May 7 23:48:53.930453 containerd[1741]: time="2025-05-07T23:48:53.930309952Z" level=info msg="CreateContainer within sandbox \"52aa5473138b7d5789c89ec1dad5ddb4f6b6d9a34ab797356bfb6dbc69672006\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 7 23:48:53.975822 containerd[1741]: time="2025-05-07T23:48:53.975737279Z" level=info msg="CreateContainer within sandbox \"52aa5473138b7d5789c89ec1dad5ddb4f6b6d9a34ab797356bfb6dbc69672006\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8c9ea5a965568f1b76f8b908e7f4963826e19865b1d41d15c505c166c53ce809\""
May 7 23:48:53.976604 containerd[1741]: time="2025-05-07T23:48:53.976431119Z" level=info msg="StartContainer for \"8c9ea5a965568f1b76f8b908e7f4963826e19865b1d41d15c505c166c53ce809\""
May 7 23:48:54.004225 systemd[1]: Started cri-containerd-8c9ea5a965568f1b76f8b908e7f4963826e19865b1d41d15c505c166c53ce809.scope - libcontainer container 8c9ea5a965568f1b76f8b908e7f4963826e19865b1d41d15c505c166c53ce809.
May 7 23:48:54.038313 containerd[1741]: time="2025-05-07T23:48:54.038260075Z" level=info msg="StartContainer for \"8c9ea5a965568f1b76f8b908e7f4963826e19865b1d41d15c505c166c53ce809\" returns successfully"
May 7 23:48:54.408094 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 7 23:48:57.080595 systemd-networkd[1428]: lxc_health: Link UP
May 7 23:48:57.090108 systemd-networkd[1428]: lxc_health: Gained carrier
May 7 23:48:57.752061 kubelet[3298]: I0507 23:48:57.751984 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9bszp" podStartSLOduration=8.751968421 podStartE2EDuration="8.751968421s" podCreationTimestamp="2025-05-07 23:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:48:54.954495362 +0000 UTC m=+198.576929395" watchObservedRunningTime="2025-05-07 23:48:57.751968421 +0000 UTC m=+201.374402374"
May 7 23:48:58.792297 systemd-networkd[1428]: lxc_health: Gained IPv6LL
May 7 23:49:03.899063 sshd[5209]: Connection closed by 10.200.16.10 port 47780
May 7 23:49:03.899504 sshd-session[5206]: pam_unix(sshd:session): session closed for user core
May 7 23:49:03.902875 systemd[1]: sshd@26-10.200.20.32:22-10.200.16.10:47780.service: Deactivated successfully.
May 7 23:49:03.904827 systemd[1]: session-29.scope: Deactivated successfully.
May 7 23:49:03.905824 systemd-logind[1703]: Session 29 logged out. Waiting for processes to exit.
May 7 23:49:03.906978 systemd-logind[1703]: Removed session 29.